AWS / Ansible - grab an S3 object cross-region

I have an EC2 instance in us-east-1. The instance is trying to grab an object from an S3 bucket that is in us-west-1, but still within the same account. The instance profile / IAM role attached to the instance has the following permissions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "kms:Decrypt"
      ],
      "Resource": [
        "arn:aws:s3:::cross-region-bucket/",
        "arn:aws:s3:::cross-region-bucket/*"
      ],
      "Effect": "Allow"
    }
  ]
}
I've also tried attaching admin access, but still no luck. I am using Ansible to grab an S3 object from this cross-region bucket:
- name: Download object from cross-region s3 bucket
  aws_s3:
    bucket: "cross-region-bucket"
    object: "object.txt"
    dest: "/local/user/object.txt"
    mode: get
    region: "us-west-1"
This works just fine when the bucket is in the same region, but now that I am trying this cross-region S3 get, I am getting the following error:
fatal: [X.X.X.X]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"X.X.X.X\". Make sure this host can be reached over ssh: ", "unreachable": true}
If I set the region in the aws_s3 task to 'us-east-1' (where the instance is), then I get the following error:
fatal: [X.X.X.X]: FAILED! => {"boto3_version": "1.18.19", "botocore_version": "1.21.19", "changed": false, "msg": "Failed while looking up bucket (during bucket_check) cross-region-bucket.: Connect timeout on endpoint URL: \"https://cross-region-bucket.s3.us-west-1.amazonaws.com/\""}
Not sure what is blocking me from accessing the cross-region bucket at this point. Any suggestions?
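For comparison, here is a minimal boto3 sketch of the same download outside of Ansible, reusing the bucket, key, and destination path from the task above (an illustrative sketch only, not the module's implementation):

# Minimal sketch: fetch object.txt from the us-west-1 bucket using the
# instance profile credentials, mirroring the aws_s3 task above.
import boto3

s3 = boto3.client("s3", region_name="us-west-1")  # region of the bucket
s3.download_file("cross-region-bucket", "object.txt", "/local/user/object.txt")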

Related

Using AWS federated identity from Github Action to assume role in another AWS account

I have 2 AWS accounts, source and destination. AWS OIDC federation is configured so that I can use the token from a GitHub Actions workflow to log in to AWS using the official action:
- name: configure aws credentials
  uses: aws-actions/configure-aws-credentials@v1
  with:
    role-to-assume: arn:aws:iam::<source account id>:role/role-A
    aws-region: us-east-1
Now, I want to use Terraform to manage resources on both accounts, source and destination. I followed a guide to allow role-A to assume role-B in the destination account.
I am getting a weird error:
Error: 022-08-31T09:28:04.246] [ERROR] default -
Error: error configuring Terraform AWS Provider: IAM Role (arn:aws:iam::<destination account id>:role/role-B) cannot be assumed.
There are a number of possible causes of this - the most common are:
* The credentials used in order to assume the role are invalid
* The credentials do not have appropriate permission to assume the role
* The role ARN is not valid
Error: operation error STS: AssumeRole, https response error StatusCode: 403, RequestID: 7227d0e8-e955-45ac-8f08-fc48699564e3, api error AccessDenied: User: arn:aws:sts::<source account id>:assumed-role/role-A/GitHubActions is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::<destination account id>:role/role-B
with provider["registry.terraform.io/hashicorp/aws"].security,
on cdk.tf.json line 4125, in provider.aws[1]:
4125: }
I can't figure out what the problem is. Role A has the following IAM policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::<destination account id>:role/role-B"
    }
  ]
}
I'd appreciate help figuring out why the request is failing... It seems like everything is configured correctly...
Edit:
Added the trust policy for Role B:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<source account id>:role/role-A"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
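For a quick check outside of Terraform, a minimal boto3 sketch of the same AssumeRole call looks like this (the role ARN uses the destination-account placeholder from this question; the session name is hypothetical):

# Sketch: verify that role-A's credentials can assume role-B directly via STS.
import boto3

sts = boto3.client("sts")  # expects role-A's credentials in the environment
resp = sts.assume_role(
    RoleArn="arn:aws:iam::<destination account id>:role/role-B",
    RoleSessionName="assume-role-test",  # hypothetical session name
)
print(resp["AssumedRoleUser"]["Arn"])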

AWS IAM - S3: "Error putting S3 server side encryption configuration: AccessDenied" even when I am the Administrator

I am the admin of my AWS account, with the arn:aws:iam::aws:policy/AdministratorAccess policy assigned to my user, which gives permissions for all actions on all resources.
I am terraforming an S3 bucket that looks like this:
resource "aws_s3_bucket" "my_bucket" {
bucket = "my_bucket"
acl = "log-delivery-write"
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
versioning {
enabled = true
}
}
but when I apply the plan I am getting: Error: error putting S3 server side encryption configuration: AccessDenied: Access Denied
That is a weird error considering I am the admin.
Getting the same error in the console:
You don't have permissions to edit default encryption
After you or your AWS administrator have updated your permissions to allow the s3:PutEncryptionConfiguration action, choose Save changes.
That is not true. The arn:aws:iam::aws:policy/AdministratorAccess policy has the following JSON:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*"
    }
  ]
}
Any ideas what is going on?
P.S: I could successfully run the same HCL in another playground account with the same access. It seems I cannot on the one I want to deploy it to, which makes zero sense.
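For reference, a minimal boto3 sketch that exercises the same s3:PutEncryptionConfiguration action the Terraform block needs (bucket name taken from the HCL above):

# Sketch: call the same API that the server_side_encryption_configuration
# block above relies on (s3:PutEncryptionConfiguration).
import boto3

s3 = boto3.client("s3")
s3.put_bucket_encryption(
    Bucket="my_bucket",  # bucket name from the HCL above
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)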

AWS Rekognition gives an InvalidS3ObjectException error

Every time I run the command
aws rekognition detect-labels --image "S3Object={Bucket=BucketName,Name=picture.jpg}" --region us-east-1
I get this error.
InvalidS3ObjectException: An error occurred (InvalidS3ObjectException) when calling the DetectLabels operation: Unable to get image metadata from S3. Check object key, region and/or access permissions.
I am trying to retrieve labels for a project I am working on but I can't seem to get past this step. I configured aws with my access key, secret key, us-east-1 region, and json as my output format.
I have also tried the code below and I receive the exact same error (I correctly replaced BucketName with the name of my bucket).
import boto3

BUCKET = "BucketName"
KEY = "picture.jpg"

def detect_labels(bucket, key, max_labels=10, min_confidence=90, region="eu-west-1"):
    rekognition = boto3.client("rekognition", region)
    response = rekognition.detect_labels(
        Image={
            "S3Object": {
                "Bucket": bucket,
                "Name": key,
            }
        },
        MaxLabels=max_labels,
        MinConfidence=min_confidence,
    )
    return response['Labels']

for label in detect_labels(BUCKET, KEY):
    print "{Name} - {Confidence}%".format(**label)
I am able to see on my user account that it is calling Rekognition.
Image showing it being called from IAM.
It seems like the issue is somewhere with my S3 bucket but I haven't found out what.
The S3 bucket and Rekognition should be in the same region for stability reasons.
More info: https://forums.aws.amazon.com/thread.jspa?threadID=243999
Check your IAM role policies/permissions, and also check the role created for the Lambda function. It's better to verify the policy using IAM Policy Checker.
I am facing a similar issue; this might be due to the permissions and policies attached to the IAM roles and to the S3 bucket. The metadata of the objects in the S3 bucket needs to be checked as well.
My S3 bucket Policy:
{
  "Version": "2012-10-17",
  "Id": "Policy1547200240036",
  "Statement": [
    {
      "Sid": "Stmt1547200205482",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::459983601504:user/veral"
      },
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::esp32-rekognition-459983601504/*"
    }
  ]
}
Cross-origin resource sharing (CORS):
[
  {
    "AllowedHeaders": [
      "*"
    ],
    "AllowedMethods": [
      "PUT",
      "POST",
      "GET",
      "DELETE"
    ],
    "AllowedOrigins": [
      "*"
    ],
    "ExposeHeaders": []
  }
]
If you use server-side encryption for the bucket via KMS, remember to also give the IAM role access to decrypt using the KMS key.
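As a small sketch of that metadata check (bucket and key taken from the question above), head_object shows whether an object is encrypted with SSE-KMS and which key it uses:

# Sketch: inspect the object's encryption metadata; an 'aws:kms' value here
# means the caller also needs kms:Decrypt on the listed key.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")
meta = s3.head_object(Bucket="BucketName", Key="picture.jpg")
print(meta.get("ServerSideEncryption"))  # e.g. 'AES256' or 'aws:kms'
print(meta.get("SSEKMSKeyId"))           # present only when SSE-KMS is used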

Terraform ELB S3 Permissions Issue

I am having an issue using Terraform (v0.9.2) adding services to an ELB (I'm using: https://github.com/segmentio/stack/blob/master/s3-logs/main.tf).
When I run terraform apply I get this error:
* module.solr.module.elb.aws_elb.main: 1 error(s) occurred:
* aws_elb.main: Failure configuring ELB attributes:
InvalidConfigurationRequest: Access Denied for bucket: my-service-logs. Please check S3bucket permission
status code: 409, request id: xxxxxxxxxx-xxxx-xxxx-xxxxxxxxx
My service looks like this:
module "solr" {
source = "github.com/segmentio/stack/service"
name = "${var.prefix}-${terraform.env}-solr"
environment = "${terraform.env}"
image = "123456789876.dkr.ecr.eu-west-2.amazonaws.com/my-docker-image"
subnet_ids = "${element(split(",", module.vpc_subnets.private_subnets_id), 3)}"
security_groups = "${module.security.apache_solr_group}"
port = "8983"
cluster = "${module.ecs-cluster.name}"
log_bucket = "${module.s3_logs.id}"
iam_role = "${aws_iam_instance_profile.ecs.id}"
dns_name = ""
zone_id = "${var.route53_zone_id}"
}
My s3-logs bucket looks like this:
module "s3_logs" {
source = "github.com/segmentio/stack/s3-logs"
name = "${var.prefix}"
environment = "${terraform.env}"
account_id = "123456789876"
}
I checked in S3 and the bucket policy looks like this:
{
  "Version": "2012-10-17",
  "Id": "log-bucket-policy",
  "Statement": [
    {
      "Sid": "log-bucket-policy",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789876:root"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-service-logs/*"
    }
  ]
}
As far as I can see ELB should have access to the S3 bucket to store the logs (it's running in the same AWS account).
The bucket and the ELB are all in eu-west-2.
Any ideas on what the problem could be would be much appreciated.
The docs for ELB access logs say that you want to allow a specific Amazon account to be able to write to S3, not your account.
As such you want something like:
{
  "Id": "Policy1429136655940",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1429136633762",
      "Action": [
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::my-loadbalancer-logs/my-app/AWSLogs/123456789012/*",
      "Principal": {
        "AWS": [
          "652711504416"
        ]
      }
    }
  ]
}
In Terraform you can use the aws_elb_service_account data source to automatically fetch the account ID used for writing logs as can be seen in the example in the docs:
data "aws_elb_service_account" "main" {}
resource "aws_s3_bucket" "elb_logs" {
bucket = "my-elb-tf-test-bucket"
acl = "private"
policy = <<POLICY
{
"Id": "Policy",
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:PutObject"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::my-elb-tf-test-bucket/AWSLogs/*",
"Principal": {
"AWS": [
"${data.aws_elb_service_account.main.arn}"
]
}
}
]
}
POLICY
}
resource "aws_elb" "bar" {
name = "my-foobar-terraform-elb"
availability_zones = ["us-west-2a"]
access_logs {
bucket = "${aws_s3_bucket.elb_logs.bucket}"
interval = 5
}
listener {
instance_port = 8000
instance_protocol = "http"
lb_port = 80
lb_protocol = "http"
}
}
Even with everything set up per the docs, I kept getting the "Access Denied for bucket" error. Removing the encryption from the bucket worked for me.
In the bucket policy, the account number must NOT be yours. It belongs to AWS, and the account number to use in your bucket policy for each region is listed at: https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-access-logs.html#attach-bucket-policy
For instance, for us-east-1 region the account number is 127311923021.
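A hedged boto3 sketch of attaching such a policy (the bucket name is illustrative; 127311923021 is the us-east-1 account number mentioned above):

# Sketch: grant the regional ELB account permission to write access logs.
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::127311923021:root"},  # us-east-1 ELB account
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::my-loadbalancer-logs/AWSLogs/*",  # illustrative bucket
    }],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket="my-loadbalancer-logs", Policy=json.dumps(policy))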
Although the question is about Terraform, here is a CloudFormation snippet that creates a bucket for the ELB's access logs, along with its bucket policy:
MyAccessLogsBucket:
  Type: AWS::S3::Bucket
  DeletionPolicy: Retain

MyAllowELBAccessBucketPolicy:
  Type: AWS::S3::BucketPolicy
  Properties:
    Bucket: !Ref MyAccessLogsBucket
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: "Allow"
          Principal:
            AWS: "arn:aws:iam::127311923021:root"
          Action:
            - "s3:PutObject"
          Resource: !Sub "arn:aws:s3:::${MyAccessLogsBucket}/AWSLogs/*"
In the principal, 127311923021 is used because that is the AWS account number to use for us-east-1.
Bucket permissions
When you enable access logging, you must specify an S3 bucket for the access logs. The bucket must meet the following requirements.
Requirements
* The bucket must be located in the same Region as the load balancer.
* Amazon S3-Managed Encryption Keys (SSE-S3) is required. No other encryption options are supported.
* The bucket must have a bucket policy that grants Elastic Load Balancing permission to write the access logs to your bucket. Bucket policies are a collection of JSON statements written in the access policy language to define access permissions for your bucket. Each statement includes information about a single permission and contains a series of elements.
Use one of the following options to prepare an S3 bucket for access logging.
Amazon S3-Managed Encryption Keys (SSE-S3) is required. No other encryption options are supported.
So the AWS docs say KMS is not supported...
In my case, it was the request_payer option set to Requester. It needs to be set to BucketOwner for it to work.
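A minimal boto3 sketch for checking and resetting that option (using the bucket name from the question):

# Sketch: check the bucket's request-payer setting and reset it to BucketOwner.
import boto3

s3 = boto3.client("s3")
print(s3.get_bucket_request_payment(Bucket="my-service-logs")["Payer"])
s3.put_bucket_request_payment(
    Bucket="my-service-logs",
    RequestPaymentConfiguration={"Payer": "BucketOwner"},
)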

S3 putObject not working access denied

AWS S3 is working on my localhost and on my live website, but my development server (which is EXACTLY the same configuration) is throwing the following error: http://xxx.xx.xxx.xxx/latest/meta-data/iam/security-credentials/ resulted in a 404 Not Found response: Error retrieving credentials from the instance profile metadata server.
Localhost URL is http://localhost/example
Live URL is http://www.example.com
Development URL is http://dev.example.com
Why would this work on localhost and live but not my development server?
Here is my sample code:
$bucket = 'example';
$s3Client = new S3Client([
    'region'  => 'us-west-2',
    'version' => '2006-03-01',
    'key'     => 'xxxxxxxxxxxxxxxxxxxxxxxx',
    'secret'  => 'xxxxxxxxxxxxxxxxxxxxxxxx',
]);
$uniqueFileName = uniqid() . '.txt';
$s3Client->putObject([
    'Bucket' => $bucket,
    'Key'    => 'dev/' . $uniqueFileName,
    'Body'   => 'this is the body!'
]);
Here is the policy:
{
  "Version": "2012-10-17",
  "Id": "Policyxxxxxxxxx",
  "Statement": [
    {
      "Sid": "Stmtxxxxxxxxx",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example/*"
    }
  ]
}
The values returned by http://169.254.169.254/latest/meta-data/iam/security-credentials/ are associated with the Role assigned to the EC2 instance when the instance was first launched.
Since you are receiving a 404 Not Found response, it is likely that your Development server does not have a Role assigned. You can check in the EC2 management console -- just click on the instance, then look at the details pane and find the Role value.
If you wish to launch a new server that is "fully" identical, use the Launch More Like This command in the Actions menu. It will also copy the Role setting.
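A small sketch of that metadata lookup, run on the development server itself (note that IMDSv2-only instances additionally require a session token):

# Sketch: ask the instance metadata service which IAM role (if any) is attached.
# A 404 here matches the "Error retrieving credentials" message above.
import urllib.error
import urllib.request

url = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"
try:
    with urllib.request.urlopen(url, timeout=2) as resp:
        print(resp.read().decode())  # prints the attached role name
except urllib.error.HTTPError as err:
    print("No IAM role attached:", err.code)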