I am creating a bucket programmatically as follows:
String bucketName = UUID.randomUUID().toString();
List<Acl> aclList = new ArrayList<>();
if (gcsBucketEntity.isPublic()) {
    Acl publicAccessAcl = Acl.newBuilder(Acl.User.ofAllUsers(), Acl.Role.READER).build();
    aclList.add(publicAccessAcl);
}
BucketInfo bucketInfo = BucketInfo
        .newBuilder(bucketName)
        .setLocation(gcsBucketEntity.getLocation()) // Multi-regions
        .setStorageClass(valueOfStrict(gcsBucketEntity.getStorageType().toString()))
        .setAcl(aclList)
        .build();
Bucket bucket = this.storage.create(bucketInfo);
I have also tried to set a BucketTargetOption instead:
Storage.BucketTargetOption bucketTargetOption = Storage.BucketTargetOption
        .predefinedAcl(Storage.PredefinedAcl.PUBLIC_READ);
Bucket bucket = this.storage.create(bucketInfo, bucketTargetOption);
with the exact same result.
The bucket is created and in the GCP console I can see that the access is public.
However, I am not able to access any files and I get an AccessDenied error instead:
<Error>
    <Code>AccessDenied</Code>
    <Message>Access denied.</Message>
    <Details>Anonymous caller does not have storage.objects.get access to the Google Cloud Storage object.</Details>
</Error>
If I create the bucket manually, I think I have to add a Storage Object Viewer role to the user allUsers:
This is the only difference I can see between the manually and automatically created bucket, so my question is:
How do I add this permission programmatically?
There is actually an example in the docs.
Apparently we have to create the bucket first and set the IAM policy afterwards.
BucketInfo bucketInfo = BucketInfo
        .newBuilder(bucketName)
        .setLocation(gcsBucketEntity.getLocation()) // Multi-regions
        .setStorageClass(valueOfStrict(gcsBucketEntity.getStorageType().toString()))
        .build();
Bucket bucket = this.storage.create(bucketInfo);
if (gcsBucketEntity.isPublic()) {
    Policy policy = this.storage.getIamPolicy(bucketName);
    this.storage.setIamPolicy(
        bucket.getName(),
        policy.toBuilder()
            .addIdentity(StorageRoles.objectViewer(), Identity.allUsers())
            .build()
    );
}
This is a bit odd imho because if something goes wrong I might end up with a "broken" bucket.
Anyway, the above code works for me.
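If the "broken bucket" scenario worries you, one possible mitigation (my own untested sketch, not from the docs) is to roll the bucket back when setting the IAM policy throws, so you never keep a half-configured bucket around:

Bucket bucket = this.storage.create(bucketInfo);
if (gcsBucketEntity.isPublic()) {
    try {
        Policy policy = this.storage.getIamPolicy(bucket.getName());
        this.storage.setIamPolicy(
            bucket.getName(),
            policy.toBuilder()
                .addIdentity(StorageRoles.objectViewer(), Identity.allUsers())
                .build()
        );
    } catch (StorageException e) {
        // Clean up the just-created bucket so we don't leave it half-configured.
        this.storage.delete(bucket.getName());
        throw e;
    }
}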
I want to create a publicly accessible Google Cloud Storage bucket with uniform_bucket_level_access enabled using Terraform. None of the examples for public buckets in the provider's docs include this setting.
When I try to use:
resource "google_storage_bucket_access_control" "public_rule" {
bucket = google_storage_bucket.a_bucket.name
role = "READER"
entity = "allUsers"
}
resource "google_storage_bucket" "a_bucket" {
name = <name>
location = <region>
project = var.project_id
storage_class = "STANDARD"
uniform_bucket_level_access = true
versioning {
enabled = false
}
}
I get the following error:
Error: Error creating BucketAccessControl: googleapi: Error 400: Cannot use ACL API to update bucket policy when uniform bucket-level access is enabled. Read more at https://cloud.google.com/storage/docs/uniform-bucket-level-access, invalid
If I remove the line for uniform access everything works as expected.
Do I have to use a google_storage_bucket_iam resource to achieve this?
You will have to use google_storage_bucket_iam. I like to use the member one so I don't accidentally clobber other IAM bindings, but you can use whatever your needs dictate.
resource "google_storage_bucket_iam_member" "member" {
bucket = google_storage_bucket.a_bucket.name
role = "roles/storage.objectViewer"
member = "allUsers"
}
EDIT: Use this instead of the google_storage_bucket_access_control resource that you have.
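For completeness, the authoritative variant would be google_storage_bucket_iam_binding, roughly like the sketch below. Be aware that a binding manages the complete member list for that role on the bucket, so it will remove any members it doesn't know about:

resource "google_storage_bucket_iam_binding" "public_read" {
  bucket = google_storage_bucket.a_bucket.name
  role   = "roles/storage.objectViewer"

  members = [
    "allUsers",
  ]
}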
I have the following S3 bucket defined:
module "bucket" {
source = "terraform-aws-modules/s3-bucket/aws"
version = "3.1.0"
bucket = local.test-bucket-name
acl = null
grant = [{
type = "CanonicalUser"
permission = "FULL_CONTROL"
id = data.aws_canonical_user_id.current.id
}, {
type = "CanonicalUser"
permission = "FULL_CONTROL"
id = data.aws_cloudfront_log_delivery_canonical_user_id.cloudfront.id
}
]
object_ownership = "BucketOwnerPreferred"
}
But when I try to terraform apply this, I get the error:
Error: error updating S3 bucket ACL (logs,private): MissingSecurityHeader: Your request was missing a required header status code: 400
This error message is not very specific. Am I missing some type of header?
I came across the same issue.
I was trying to update an ACL on a bucket that previously had private set as its ACL, modifying my Terraform code to match ACL entries that someone had created manually via the GUI.
To get it working, I manually removed one of the ACL entries I was trying to add from the S3 bucket, then re-ran Terraform, and it completed without an error.
I see the same error in CloudTrail as well.
It's as if you can't change the ACL from private to null without adding an ACL entry.
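If you hit this, it can help to inspect what grants are actually on the bucket before re-running Terraform, and optionally reset it to a plain private ACL first. The bucket name below is illustrative:

aws s3api get-bucket-acl --bucket my-logs-bucket
# reset to a plain private ACL, then let Terraform re-apply the grants
aws s3api put-bucket-acl --bucket my-logs-bucket --acl private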
From an AWS Lambda I want to list the objects inside an S3 bucket. When testing the function locally I'm getting an access denied error.
public async Task<string> FunctionHandler(string input, ILambdaContext context)
{
    var secretKey = "***";
    var uid = "***";
    var bucketName = "my-bucket-name";

    AmazonS3Client s3Client = new AmazonS3Client(uid, secretKey);

    ListObjectsRequest listObjectsRequest = new ListObjectsRequest();
    listObjectsRequest.BucketName = bucketName;

    var listObjectsResponse = await s3Client.ListObjectsAsync(listObjectsRequest);
    // exception is thrown
    ...
}
Amazon.S3.AmazonS3Exception: Access Denied at
Amazon.Runtime.Internal.HttpErrorResponseExceptionHandler.HandleExceptionStream(IRequestContext
requestContext, IWebResponseData httpErrorResponse,
HttpErrorResponseException exception, Stream responseStream) at
Amazon.Runtime.Internal.HttpErrorResponseExceptionHandler.HandleExceptionAsync(IExecutionContext
executionContext, HttpErrorResponseException exception) ....
The bucket I'm using in this example "my-bucket-name" is publicly accessible and it has
Any idea?
First of all, IAM policies are the preferred way to control access to S3 buckets.
For S3 permissions it is always important to distinguish between bucket-level actions and object-level actions, and also who is calling the action. In your code I can see that you use ListObjects, which is a bucket-level action, so that part is fine.
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html
What did catch my eye is the following:
var secretKey = "***";
var uid = "***";
var bucketName = "my-bucket-name";
AmazonS3Client s3Client = new AmazonS3Client(uid, secretKey);
That means that you are calling S3 as an authenticated AWS identity, not as an anonymous caller. But even in your screenshot you can see that "Authenticated users group (anyone with an AWS account)" does not have any permissions assigned.
If you already have a role (or user), I would suggest granting the bucket read permissions to that particular role (user) via an IAM policy. Adding a read ACL for authenticated AWS users should help as well.
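As a rough example, an identity-based policy attached to the calling role/user that allows listing the bucket and reading its objects could look like this (the bucket name is the one from the question; adjust as needed):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::my-bucket-name"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::my-bucket-name/*"
    }
  ]
}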
I have 3 folders in an S3 bucket and an AWS Transfer user which has access to one folder in that bucket, set up via Terraform:
resource "aws_transfer_user" "foo" {
server_id = aws_transfer_server.foo.id
user_name = "tftestuser"
role = aws_iam_role.foo.arn
home_directory_type = "LOGICAL"
home_directory_mappings {
entry = "/test.pdf"
target = "/bucket3/test-path/folder1"
//target = "/bucket3/test-path/folder2" --> Something like this accessing folder1 and folder2
}
}
Now I want it to have access to the 2nd folder as well. Is it possible to add another folder to the user, or will I have to create a new aws_transfer_user?
Try defining multiple home_directory_mappings blocks; Terraform accepts repeated blocks like this in certain resources, for example ordered_cache_behavior in aws_cloudfront_distribution.
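Something roughly like this, assuming your provider version accepts multiple mapping blocks (untested sketch; the entry and target values are placeholders):

resource "aws_transfer_user" "foo" {
  server_id           = aws_transfer_server.foo.id
  user_name           = "tftestuser"
  role                = aws_iam_role.foo.arn
  home_directory_type = "LOGICAL"

  home_directory_mappings {
    entry  = "/folder1"
    target = "/bucket3/test-path/folder1"
  }

  home_directory_mappings {
    entry  = "/folder2"
    target = "/bucket3/test-path/folder2"
  }
}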
I've created a user in IAM, and attached 2 managed policies: AmazonS3FullAccess and AdministratorAccess. I would like to upload files to an S3 bucket called "pscfront".
I am using the following code to do the upload:
AWSCredentials credentials = new BasicAWSCredentials(Constants.AmazonWebServices.AccessKey, Constants.AmazonWebServices.SecretKey);
using (var client = new AmazonS3Client(credentials, RegionEndpoint.USEast1))
{
    var loc = client.GetBucketLocation("s3.amazonaws.com/pscfront");
    var tu = new TransferUtility(client);
    tu.Upload(filename, Constants.AmazonWebServices.BucketName, keyName);
}
This fails with the exception "AccessDenied" (inner exception "The remote server returned an error: (403) Forbidden.") at the call to GetBucketLocation, or at the tu.Upload call if I comment that line out.
Any idea what gives?
smdh
Nothing wrong with the permissions -- I was setting the bucket name incorrectly. You just pass the plain bucket name -- "pscfront" in this case.
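For reference, the working version of the snippet from the question is the same code with the plain bucket name (no endpoint prefix):

AWSCredentials credentials = new BasicAWSCredentials(Constants.AmazonWebServices.AccessKey, Constants.AmazonWebServices.SecretKey);
using (var client = new AmazonS3Client(credentials, RegionEndpoint.USEast1))
{
    // Pass just the bucket name, not "s3.amazonaws.com/pscfront".
    var loc = client.GetBucketLocation("pscfront");
    var tu = new TransferUtility(client);
    tu.Upload(filename, Constants.AmazonWebServices.BucketName, keyName);
}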