I want to export data from DocumentDB (which is MongoDB-compatible) as the source in account A to S3 as the target in account B in AWS.
Can I achieve this with VPC peering, and what else do I have to do for cross-account DMS from DocumentDB to S3?
That kind of depends on where your replication instance lives.
If you place the replication instance in the same VPC as the DocumentDB cluster, you won't even need VPC peering.
Just set up the security groups to allow the replication instance to reach your DocumentDB cluster, and set it up as a source endpoint.
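If you're working from the CLI, creating the source endpoint might look roughly like this sketch (the identifier, hostname, database name, and credentials are all placeholders; DMS uses the docdb engine name for DocumentDB):

# all identifiers below are placeholders
aws dms create-endpoint \
    --endpoint-identifier docdb-source \
    --endpoint-type source \
    --engine-name docdb \
    --server-name my-cluster.cluster-xxxx.us-east-1.docdb.amazonaws.com \
    --port 27017 \
    --database-name mydb \
    --username dmsuser \
    --password '<password>'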
Assuming the replication instance has internet access or you've configured an S3 VPC endpoint, you can set up the S3 bucket as a target. The role you configure when setting up the target needs to have access to the S3 bucket.
A sample policy from the documentation that you can attach to the target role:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:DeleteObject",
                "s3:PutObjectTagging"
            ],
            "Resource": [
                "arn:aws:s3:::buckettest2/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::buckettest2"
            ]
        }
    ]
}
The bucket in the target account needs to have a bucket policy that allows the same actions from the role in the source account. The policy will be almost identical, except that you also need to add the principal.
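A sketch of what that could look like, assuming the DMS target role in the source account is named dms-s3-target-role (the role name and account ID 111122223333 are placeholders):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:role/dms-s3-target-role"
            },
            "Action": [
                "s3:PutObject",
                "s3:DeleteObject",
                "s3:PutObjectTagging"
            ],
            "Resource": "arn:aws:s3:::buckettest2/*"
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:role/dms-s3-target-role"
            },
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::buckettest2"
        }
    ]
}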
Related
My CI pipeline deposits intermediate artifacts in an S3 bucket that I don't want to be accessible to the public, but that does need to be accessible to myself and certain IP addresses.
Currently I have the following bucket permission policy in place:
{
    "Version": "2012-10-17",
    "Id": "CIBucketPolicy",
    "Statement": [
        {
            "Sid": "IpAllowList",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::ci-bucket-name",
                "arn:aws:s3:::ci-bucket-name/*"
            ],
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": [
                        "ip-address1",
                        "ip-address2",
                        "ip-address3"
                    ]
                }
            }
        }
    ]
}
One of the artifacts that my CI deposits here is a raw disk image, which is used in later stages to build an Amazon Machine Image using aws ec2 import-snapshot. The problem here is that the aws ec2 import-snapshot stage keeps failing with the error message:
ClientError: Disk validation failed [We do not have access to the given resource. Reason 403 Forbidden]
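For reference, the import call in that stage looks roughly like this (bucket name and key are placeholders):

# import a raw disk image from S3 as an EBS snapshot
aws ec2 import-snapshot \
    --description "CI raw disk image" \
    --disk-container "Format=RAW,UserBucket={S3Bucket=ci-bucket-name,S3Key=images/disk.raw}"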
I'm fairly confident that something about my bucket permission policy is blocking EC2 from being able to access the bucket even though they're in the same region, but I haven't figured out how to overcome this without simply removing the IP address condition.
Do I need to add something special to this policy for EC2 to have access? Perhaps I'm missing an S3 Action that EC2 needs to be able to perform in order to access the image file? Any advice would be greatly appreciated!
I am creating a role and trying to attach an AWS managed policy for Transit Gateway full access.
But I am not able to find any managed policy for Transit Gateway.
There is no such AWS managed policy, so you can create your own customer managed policy. For example:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "FullTransitGatewayPermissions",
            "Effect": "Allow",
            "Action": [
                "ec2:*TransitGateway*"
            ],
            "Resource": "*"
        }
    ]
}
Depending on exactly what you need, you can add more permissions or be more selective.
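As a sketch of being more selective, a read-only variant could rely on the Describe*/Get* naming of the transit gateway API calls:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadOnlyTransitGatewayPermissions",
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeTransitGateway*",
                "ec2:GetTransitGateway*"
            ],
            "Resource": "*"
        }
    ]
}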
I'm restricting bucket access to my VPC endpoints. I have a bucket, say test-bucket, and I have added the policy below so that access is restricted to requests coming through the VPC endpoints:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Access From Dev, QA Account",
            "Effect": "Deny",
            "NotPrincipal": {
                "AWS": "arn:aws:iam::x:root"
            },
            "Action": [
                "s3:GetBucketLocation",
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::test-bucket",
                "arn:aws:s3:::test-bucket/*"
            ],
            "Condition": {
                "StringNotEquals": {
                    "aws:sourceVpce": [
                        "vpce-1234",
                        "vpce-1235"
                    ]
                }
            }
        }
    ]
}
This policy blocks console and AWS CLI access for all users; only instances in the VPC can reach the S3 bucket. I have a user group called D consisting of 40 users. I cannot add the group ARN to the principal, as AWS doesn't support that, but it is tedious to add all 40 users to the bucket policy. We deny all other traffic because we are making our objects public: this bucket is used as a yum repo and has to be available over HTTPS for the instances to download packages during a yum install/update. How can I give access using that user group D, or is there another way to provide user access?
A group is not a valid principal, which means you would be limited to listing the ARNs of the individual IAM users in this policy.
As a workaround, you could create an IAM role that can be assumed either through the console or via the CLI. Then make the S3 bucket policy reference the ARN of that IAM role instead. Finally, allow the users in the group to assume the IAM role.
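A minimal sketch of the identity policy you could attach to group D, assuming a role named s3-yum-repo-access (the role name and account ID 111122223333 are placeholders):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::111122223333:role/s3-yum-repo-access"
        }
    ]
}

The role's ARN then goes into the bucket policy's NotPrincipal alongside the account root.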
How do I give an IAM user access to only the resources created by Elastic Beanstalk, i.e. the S3 bucket and the EC2 instances? The user should not be able to access any other S3 bucket or EC2 instance that was not created by Elastic Beanstalk.
The same policy should apply to EC2 instances created automatically via the Auto Scaling policy.
You can go with the tags approach. You can set Elastic Beanstalk and the Auto Scaling launch configuration to create instances with predefined tags, and then allow users to access only resources carrying those tags:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:*"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "ec2:ResourceTag/department": "dev"
                }
            }
        }
    ]
}
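For the S3 side, Elastic Beanstalk stores its artifacts in buckets named elasticbeanstalk-<region>-<account-id>, so a sketch along these lines could scope the S3 access (the bucket name pattern is the only assumption here):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::elasticbeanstalk-*",
                "arn:aws:s3:::elasticbeanstalk-*/*"
            ]
        }
    ]
}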
There could be better approaches; let's wait and see other responses.
I am looking to lock down an S3 bucket for security purposes - I'm storing deployment images in the bucket.
What I want to do is create a bucket policy that supports anonymous downloads over http only from EC2 instances in my account.
Is there a way to do this?
An example of a policy that I'm trying to use (it won't allow itself to be applied):
{
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::[my bucket name]",
            "Condition": {
                "ArnEquals": {
                    "aws:SourceArn": "arn:aws:ec2:us-east-1:[my account id]:instance/*"
                }
            }
        }
    ]
}
Just to clarify how this is normally done: you create an IAM policy, attach it to a new or existing role, and decorate the EC2 instance with the role. You can also provide access through bucket policies, but that is less precise.
Details below:
S3 buckets are deny-by-default for everyone except the owner. So you create your bucket and upload the data. You can verify with a browser that the files are not accessible by trying https://s3.amazonaws.com/MyBucketName/file.ext. It should come back with the error code "Access Denied" in the XML. If you get the error code "NoSuchBucket", you have the URL wrong.
Create an IAM policy based on arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess. It starts out looking like the snip below. Take a look at the "Resource" key, and note that it is set to a wildcard. You just modify this to be the ARN of your bucket. You need one entry for the bucket and one for its contents, so it becomes: "Resource": ["arn:aws:s3:::MyBucketName", "arn:aws:s3:::MyBucketName/*"]
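Roughly, that managed policy body is (newer versions also add s3-object-lambda actions):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Resource": "*"
        }
    ]
}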
Now that you have a policy, you want to decorate your instances with an IAM role that automatically grants them this policy, all without any authentication keys having to live on the instance. So go to Roles, create a new role, make it an Amazon EC2 role, find the policy you just created, and your role is ready.
Finally you create your instance and add the IAM role you just created. If the machine already has its own role, you have to merge the two roles into a new one for the machine. If the machine is already running, it won't get the new role until you restart it.
Now you should be good to go. The machine has the rights to access the S3 share, and you can use the following command to copy files to your instance. Note that you have to specify the region:
aws s3 cp --region us-east-1 s3://MyBucketName/MyFileName.tgz /home/ubuntu
Please note: "security through obscurity" is only a thing in the movies. Either something is provably secure, or it is insecure.
I used something like this:
{
    "Version": "2012-10-17",
    "Id": "Allow only My VPC",
    "Statement": [
        {
            "Sid": "Allow only My VPC",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::{BUCKET_NAME}",
                "arn:aws:s3:::{BUCKET_NAME}/*"
            ],
            "Condition": {
                "StringLike": {
                    "aws:sourceVpc": "{VPC_ID}"
                }
            }
        }
    ]
}
Use "aws:sourceVpce": "{VPCe_ENDPOINT}" in the condition instead if you want to match a specific VPC endpoint rather than the whole VPC.