I am not able to set up Cross-Region Replication when the objects are server-side encrypted. I am using the AWS CLI to set it up. This is what I have done.
The cross-region replication role's IAM policy looks like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetReplicationConfiguration",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::source-bucket"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObjectVersion",
        "s3:GetObjectVersionAcl",
        "s3:GetObjectVersionTagging"
      ],
      "Resource": [
        "arn:aws:s3:::source-bucket/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ReplicateObject",
        "s3:ReplicateDelete",
        "s3:ReplicateTags"
      ],
      "Resource": "arn:aws:s3:::destination-bucket/*"
    }
  ]
}
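(I believe the role also needs KMS permissions per the AWS documentation on replicating SSE-KMS objects; the statements below are a sketch of those, where the source key ARN and the source region in kms:ViaService are placeholders for my actual values:)
{
  "Effect": "Allow",
  "Action": [
    "kms:Decrypt"
  ],
  "Condition": {
    "StringLike": {
      "kms:ViaService": "s3.us-east-1.amazonaws.com",
      "kms:EncryptionContext:aws:s3:arn": "arn:aws:s3:::source-bucket/*"
    }
  },
  "Resource": "arn:aws:kms:us-east-1:1234567890:key/SOURCE-KEY-ID"
},
{
  "Effect": "Allow",
  "Action": [
    "kms:Encrypt"
  ],
  "Condition": {
    "StringLike": {
      "kms:ViaService": "s3.us-west-2.amazonaws.com",
      "kms:EncryptionContext:aws:s3:arn": "arn:aws:s3:::destination-bucket/*"
    }
  },
  "Resource": "arn:aws:kms:us-west-2:1234567890:key/849b779d-bdc3-4190-b285-6006657a578c"
}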
This is what my replication configuration file looks like:
{
  "Role": "arn:aws:iam::1234567890:role/replication-role",
  "Rules": [
    {
      "ID": "abcd",
      "Prefix": "",
      "Status": "Enabled",
      "SourceSelectionCriteria": {
        "SseKmsEncryptedObjects": {
          "Status": "Enabled"
        }
      },
      "Destination": {
        "Bucket": "arn:aws:s3:::destinationbucket",
        "EncryptionConfiguration": {
          "ReplicaKmsKeyID": "arn:aws:kms:us-west-2:1234567890:key/849b779d-bdc3-4190-b285-6006657a578c"
        }
      }
    }
  ]
}
This is what my CLI command looks like:
aws s3api put-bucket-replication --bucket "sourcebucket" --replication-configuration file://./replicationconfigfile.json
When I go to the S3 bucket after running the CLI command, I can see the replication rule created with KMS-Encrypted Objects set to Replicate, but when I click Edit to see the details, it does not have any KMS keys selected.
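For reference, the stored configuration can be read back with:
aws s3api get-bucket-replication --bucket "sourcebucket"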
------Update-------
If I delete the rule created by the CLI and set it up using the console, the wizard selects all the KMS keys. So the question is: why are no KMS keys in the source region selected when I use the CLI?
What am I missing here?
The KMS key list field shown in the wizard is missing from the CLI. I have the same issue: I am using KMS to encrypt both my origin and my destination bucket, and I can't select the key for decrypting the objects in my origin bucket because I am using Terraform to create the replication rule.
As you can see, the only parameter that exists is the replication criteria, and its value can only be true or false; the list field "Choose one or more keys for decrypting source objects" does not exist in the AWS CLI.
I have already reported this issue to AWS.
What did I do?
I replaced my customer-managed KMS key with encryption managed by AWS: I enabled server-side encryption and chose the AES256 encryption type on both buckets, origin and destination, and it works fine for me.
Just in case anyone else runs into this issue, I had a long conversation with AWS support where they confirmed that there is no way to set the key for decrypting source objects programmatically (or in CloudFormation). In my case, I had to set up the configuration with the SDK and then manually set the decryption key in the console. Fairly annoying that they haven't fixed this as of 7/8/2020.
Looking around at a Terraform thread where they discuss this same issue, I believe they get around this by setting the IAM policy for CRR directly, but I'm unsure of exactly how you do that. https://github.com/terraform-providers/terraform-provider-aws/issues/6046#issuecomment-427960842
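For what it's worth, here is a sketch of what that rule looks like in the Terraform aws_s3_bucket replication_configuration block of that era, mirroring the JSON configuration above (it sets the same fields the CLI does, not the console-only key list):
resource "aws_s3_bucket" "source" {
  bucket = "source-bucket" # placeholder

  versioning {
    enabled = true
  }

  replication_configuration {
    role = "arn:aws:iam::1234567890:role/replication-role"

    rules {
      id     = "abcd"
      prefix = ""
      status = "Enabled"

      source_selection_criteria {
        sse_kms_encrypted_objects {
          enabled = true
        }
      }

      destination {
        bucket             = "arn:aws:s3:::destinationbucket"
        replica_kms_key_id = "arn:aws:kms:us-west-2:1234567890:key/849b779d-bdc3-4190-b285-6006657a578c"
      }
    }
  }
}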
Related
I tried the simplest case following the AWS documentation. I created a role, assigned it to the instance, and rebooted the instance. To test access interactively, I logged on to the Windows instance and ran aws s3api list-objects --bucket testbucket. I got the error An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied.
The next test was to create a .aws/credentials file and add a profile to assume the role. I modified the role (assigned to the instance) and added permission for any user in the account to assume it. When I run the same command as aws s3api list-objects --bucket testbucket --profile assume_role, the objects in the bucket are listed.
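A sketch of such a profile in .aws/credentials (the role name is a placeholder; credential_source tells the CLI to use the instance metadata credentials to assume the role):
[assume_role]
# role name below is a placeholder
role_arn = arn:aws:iam::111111111111:role/test-role
credential_source = Ec2InstanceMetadata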
Here is my test role's trust relationship:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "ec2.amazonaws.com",
          "ssm.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    },
    {
      "Sid": "UserCanAssumeRole",
      "Effect": "Allow",
      "Principal": {
        "AWS": "111111111111"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
The role has only one attached permissions policy, AmazonS3FullAccess.
When I switch to the role in the AWS console, I can see the contents of the S3 bucket (and no other action is allowed in the console).
My assumption is that the EC2 instance does not assume the role.
How can I pinpoint where the problem is?
The problem was with the Windows proxy.
I checked the proxy environment variables; none were set. When I checked Control Panel -> Internet Options, I saw that the proxy text box showed a proxy value, but the "Use proxy" checkbox was not checked. Next to it was the text "Some of your settings are managed by your organization." The proxy bypass list had 169.254.169.254 listed.
I ran the command in debug mode and saw that the CLI connects to the proxy, which cannot access 169.254.169.254, so no credentials are obtained. When I explicitly set the environment variable with set NO_PROXY=169.254.169.254, everything started to work.
Why the AWS CLI uses the proxy from the Windows system I do not understand. Worst of all, it uses the proxy but does not check the proxy bypass list. Lesson learned: run the command in debug mode and verify the output.
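For reference, debug mode is just the --debug flag on the same command:
aws s3api list-objects --bucket testbucket --debug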
I would like to export snapshots from ElastiCache to S3. This is relatively easy to set up manually by following the documentation. However, I would like to do this programmatically using Terraform.
In the documentation it states:
5. Choose Access Control List.
<snip />
8. Set the permissions on the bucket by choosing Yes for:
a. List objects
b. Write objects
c. Read bucket permissions
I haven't been able to find anywhere in the Terraform documentation where I can set these permissions on the bucket. I did find some documentation that maps the above permissions to IAM policy permissions, and I applied these in the bucket policy. Unfortunately, I still get the following error:
An error occurred (InvalidParameterValue) when calling the CopySnapshot operation: Elasticache has not been granted ReadACP permissions on the S3 bucket my-backups
CODE
Setting up S3 Bucket:
data "aws_iam_policy_document" "my_backups" {
statement {
actions = [
"s3:GetBucketAcl",
"s3:ListBucket",
"s3:PutObject",
"s3:DeleteObject",
]
resources = [
"arn:aws:s3:::my-backups",
"arn:aws:s3:::my-backups/*",
]
principals {
type = "CanonicalUser"
identifiers = ["540804c33a284a299d2547575ce1010f2312ef3da9b3a053c8bc45bf233e4353"]
}
}
}
resource "aws_s3_bucket" "my_backups" {
bucket = "my-backups"
policy = "${data.aws_iam_policy_document.my_backups.json}"
}
GITHUB
I found two issues (with code that hasn't been merged) that relate to this:
Support bucket ACLs
Implementation of acl grants
I'm guessing this may not be possible until one of these two issues is resolved.
Based on the error, it appears that your ElastiCache principal cannot read the bucket's access control policy permissions. You need to add some additional permissions, specifically based on this page: https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/backups-exporting.html#backups-exporting-grant-access
{
  "Statement": {
    "Effect": "Allow",
    "Action": [
      "s3:GetBucketLocation",
      "s3:ListAllMyBuckets"
    ],
    "Resource": "arn:aws:s3:::*"
  },
  "Version": "2012-10-17"
}
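Since the error specifically mentions ReadACP, which is a bucket ACL grant rather than an IAM policy permission, granting the ACL directly may also be needed. A sketch with the AWS CLI, using the ElastiCache canonical user ID from the question; note that put-bucket-acl replaces the existing ACL, so YOUR-CANONICAL-ID is a placeholder for your own account's canonical user ID, which must be re-granted:
aws s3api put-bucket-acl --bucket my-backups \
  --grant-full-control id=YOUR-CANONICAL-ID \
  --grant-read id=540804c33a284a299d2547575ce1010f2312ef3da9b3a053c8bc45bf233e4353 \
  --grant-write id=540804c33a284a299d2547575ce1010f2312ef3da9b3a053c8bc45bf233e4353 \
  --grant-read-acp id=540804c33a284a299d2547575ce1010f2312ef3da9b3a053c8bc45bf233e4353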
I've tested a variation with wide policy access and got to the same point: the log group is created, but the log stream isn't.
I followed https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/amazon-mq-configuring-cloudwatch-logs.html and the expected result is getting those messages in CloudWatch, but nothing is coming in.
The goal is to have audit and general MQ logs in CloudWatch.
Has anyone managed to stream MQ logs to CloudWatch? How could I debug this further?
I managed to create the Amazon MQ broker with logging enabled, publishing log messages to CloudWatch, using Terraform AWS provider 1.43.2. My project is locked to this older provider version, so if you're using a newer one you should be fine:
https://github.com/terraform-providers/terraform-provider-aws/blob/master/CHANGELOG.md#1430-november-07-2018
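For context, a minimal sketch of the broker resource with logging enabled (broker name, engine version, security group, and user are placeholders):
resource "aws_mq_broker" "example" {
  broker_name        = "example-broker" # placeholder
  engine_type        = "ActiveMQ"
  engine_version     = "5.15.0"       # placeholder
  host_instance_type = "mq.t2.micro"
  security_groups    = ["sg-12345678"] # placeholder

  # this is the part that turns on publishing to CloudWatch
  logs {
    general = true
    audit   = true
  }

  user {
    username = "admin"               # placeholder
    password = "change-me-immediately" # placeholder
  }
}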
This was the policy that I didn't get right the first time, and that MQ needs in order to post to CloudWatch:
data "aws_iam_policy_document" "mq-log-publishing-policy" {
statement {
actions = [
"logs:CreateLogStream",
"logs:PutLogEvents",
]
resources = ["arn:aws:logs:*:*:log-group:/aws/amazonmq/*"]
principals {
identifiers = ["mq.amazonaws.com"]
type = "Service"
}
}
}
resource "aws_cloudwatch_log_resource_policy" "mq-log-publishing-policy" {
policy_document = "${data.aws_iam_policy_document.mq-log-publishing-policy.json}"
policy_name = "mq-log-publishing-policy"
}
Make sure this policy has been correctly applied, otherwise nothing will come up in CloudWatch. I checked it using the AWS CLI:
aws --profile my-testing-profile-name --region my-profile-region logs describe-resource-policies
and you should see the policy in the output.
Or, if you're using the AWS CLI, you can try:
aws --region [your-region] logs put-resource-policy --policy-name AmazonMQ-logs \
--policy-document '{
  "Statement": [
    {
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Effect": "Allow",
      "Principal": {
        "Service": "mq.amazonaws.com"
      },
      "Resource": "arn:aws:logs:*:*:log-group:/aws/amazonmq/*"
    }
  ],
  "Version": "2012-10-17"
}'
Install the AWS CLI for Windows and configure your credentials: https://docs.aws.amazon.com/cli/latest/userguide/install-windows.html
Create a JSON file in "C:\Users\YOUR-USER\" containing your policy, for example C:\Users\YOUR-USER\policy.json. You can simply copy this one here and paste it into your .json file:
{"Version": "2012-10-17","Statement": [{"Effect": "Allow","Principal": {"Service": "mq.amazonaws.com"},"Action":["logs:CreateLogStream","logs:PutLogEvents"],"Resource" : "arn:aws:logs:*:*:log-group:/aws/amazonmq/*"}]}
Open your CMD and simply type:
aws --region eu-central-1 logs put-resource-policy --policy-name amazonmq_to_cloudwatch --policy-document file://policy.json
Well done! This will create an AWS resource policy, which sometimes is not possible to create in the IAM console.
When running the AWS (Amazon Web Services) import-image task:
aws ec2 import-image --description "My OVA" --disk-containers file://c:\TEMP\containers.json
I get the following error:
An error occurred (InvalidParameter) when calling the ImportImage operation: User does not have access to the S3 object.(mys3bucket/vms/myOVA.ova)
I followed all of the instructions in this AWS document on importing a VM (including steps 1, 2, and 3). Specifically, I set up a vmimport role and the recommended policies for the role. What am I doing wrong?
I finally figured this out. The problem was that my IAM user, the one associated with the vmimport role, did not have access to my S3 bucket. Once I granted my IAM user access to my S3 bucket (by setting a bucket policy in S3), the import-image command kicked off the process successfully.
To set the bucket policy in S3, right-click on your bucket (i.e. the top level bucket name in S3), then click "Properties". Then from the right-hand menu that gets displayed, open "Permissions", and click "Add bucket policy". A small window will come up where you can put in JSON for a policy. Here is the one that worked for me:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1476979061000",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::MY-AWS-account-ID:user/myIAMuserID"
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::mys3bucket",
        "arn:aws:s3:::mys3bucket/*"
      ]
    }
  ]
}
You'll need to replace "MY-AWS-account-ID" with your AWS Account ID, and "myIAMuserID" with your IAM user ID that contains the vmimport role.
This document talks about how to get your AWS Account ID. And this document talks more about granting permissions in S3.
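As a quick sanity check, the stored policy can be read back with:
aws s3api get-bucket-policy --bucket mys3bucket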
I had the same error message, except the message did not specify the name of the S3 object. In my containers.json file I had to use the bucket name instead of the full ARN; that is, instead of arn:aws:s3:::mybucketname I just use mybucketname.
This works:
[
  {
    "Description": "VM Simulator",
    "Format": "vmdk",
    "UserBucket": {
      "S3Bucket": "mybucketname",
      "S3Key": "vmdisks/vmSimulator.vmdk"
    }
  }
]
This fails:
[
  {
    "Description": "Qtof Simulator",
    "Format": "vmdk",
    "UserBucket": {
      "S3Bucket": "arn:aws:s3:::mybucketname",
      "S3Key": "vmdisks/vmSimulator.vmdk"
    }
  }
]
with the message:
An error occurred (InvalidParameter) when calling the ImportImage operation: User does not have access to the S3 object
I managed to get this error by attempting to import a corrupted (in my case, empty) .vmdk file. It was completely misleading, suggesting an access control issue when in reality it was a file parsing issue. I suppose internally they have a try/catch and throw the "User does not have access to the S3 object" error whenever anything goes wrong while accessing the file. Earlier, I also got this error when the file straight up didn't exist. Just posting this for anyone who gets stuck like me: this error can mean that the file doesn't exist, isn't accessible, or can't be processed correctly.
One interesting note: in my specific case, if the file path appeared in parentheses after the error, it meant the file didn't exist. If there were no parentheses after the error, it was a file parsing issue.
In my case it was exactly what the troubleshooting docs say: https://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-troubleshooting.html. My AWS CLI was logged in to a different region than the one the bucket was created in.
I had the same problem. In my case, I have multiple accounts for the same user, and I had set up the AWS configuration with the wrong account, one that does not have access to this bucket.
I want to connect my AWS S3 bucket with my AWS Lambda function. I created my S3 bucket and named it xyz. While creating an event source on my AWS Lambda function, it shows the following error:
There was an error creating the event source mapping: Your bucket must be in the same region as the function.
While going through this link, I found out that I needed to set up an event notification on the S3 bucket for the AWS Lambda function. But I am unable to set up the event notification for the S3 bucket, as the Events tab of the S3 bucket's properties does not show settings for an AWS Lambda function.
My policy document for the IAM role I created for Lambda is as follows:
{
  "Version": "VersionNumber",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::xyz/*"
      ]
    }
  ]
}
Can somebody let me know why I am unable to create an event for AWS Lambda for an operation on the S3 bucket?
Thanks to John's comment, I was able to resolve this issue.
This problem occurs when (as the error message clearly states) the Lambda function and the S3 bucket reside in different regions.
To create the Lambda function in the same region as the S3 bucket, you need to know the bucket's region.
To view the region of an Amazon S3 bucket, click on the bucket in the management console, then go to the Properties tab. The region will be displayed.
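Alternatively, the region can be fetched with the CLI (bucket name as in the question; note that us-east-1 is reported as a null LocationConstraint):
aws s3api get-bucket-location --bucket xyz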
Now that you know your target region, you can switch to it in the AWS console by selecting it from the region dropdown in the top right corner, just before the Support menu.
Once you change your region to that of the S3 bucket, creating a new Lambda function will solve the issue.
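Once both are in the same region, the notification can also be wired up from the CLI; a sketch assuming a function named my-function in account 123456789012 and region us-east-1 (all placeholders). The Lambda resource permission must be added first so S3 is allowed to invoke the function:
aws lambda add-permission \
    --function-name my-function \
    --statement-id s3-invoke \
    --action lambda:InvokeFunction \
    --principal s3.amazonaws.com \
    --source-arn arn:aws:s3:::xyz

aws s3api put-bucket-notification-configuration --bucket xyz \
    --notification-configuration '{
  "LambdaFunctionConfigurations": [
    {
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:my-function",
      "Events": ["s3:ObjectCreated:*"]
    }
  ]
}'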