I have tried everything but can't figure out what's wrong with my IAM policy for per-user access based on the Cognito identity ID (${cognito-identity.amazonaws.com:sub}).
I am using a Lambda function with boto3 to authenticate a Cognito user, obtain that user's identity credentials, and then get_object from a folder that is separated per Cognito user.
Here's my Lambda code:
import json
import urllib.parse
import boto3
import sys
import hmac, hashlib, base64
print('Loading function')
cognito = boto3.client('cognito-idp')
cognito_identity = boto3.client('cognito-identity')
def lambda_handler(event, context):
    print("Received event: " + json.dumps(event, indent=2))

    username = '{substitute_with_my_own_data}'       # authenticated user
    app_client_id = '{substitute_with_my_own_data}'  # cognito app client id
    key = '{substitute_with_my_own_data}'            # cognito app client secret key
    cognito_provider = 'cognito-idp.{region}.amazonaws.com/{cognito-pool-id}'

    message = bytes(username + app_client_id, 'utf-8')
    key = bytes(key, 'utf-8')
    secret_hash = base64.b64encode(hmac.new(key, message, digestmod=hashlib.sha256).digest()).decode()
    print("SECRET HASH:", secret_hash)

    auth_data = {'USERNAME': username, 'PASSWORD': '{substitute_user_password}', 'SECRET_HASH': secret_hash}

    auth_response = cognito.initiate_auth(
        AuthFlow='USER_PASSWORD_AUTH',
        AuthParameters=auth_data,
        ClientId=app_client_id
    )
    print(auth_response)

    # From the response that contains the assumed role, get the temporary
    # credentials that can be used to make subsequent API calls
    auth_result = auth_response['AuthenticationResult']
    id_token = auth_result['IdToken']

    id_response = cognito_identity.get_id(
        IdentityPoolId='{sub_cognito_identity_pool_id}',
        Logins={cognito_provider: id_token}
    )
    # up to this stage the correct user cognito identity id is returned
    print('id_response = ' + id_response['IdentityId'])

    credentials_response = cognito_identity.get_credentials_for_identity(
        IdentityId=id_response['IdentityId'],
        Logins={cognito_provider: id_token}
    )
    secretKey = credentials_response['Credentials']['SecretKey']
    accessKey = credentials_response['Credentials']['AccessKeyId']
    sessionToken = credentials_response['Credentials']['SessionToken']
    print('secretKey = ' + secretKey)
    print('accessKey = ' + accessKey)
    print('sessionToken = ' + sessionToken)

    # Use the temporary credentials that AssumeRole returns to make a
    # connection to Amazon S3
    s3 = boto3.client(
        's3',
        aws_access_key_id=accessKey,
        aws_secret_access_key=secretKey,
        aws_session_token=sessionToken,
    )

    # Use the Amazon S3 client that is now configured with the
    # credentials to access your S3 buckets.
    # for bucket in s3.buckets.all():
    #     print(bucket.name)

    # Get the object from the event and show its content type
    bucket = '{bucket-name}'
    key = 'abc/{user_cognito_identity_id}/test1.txt'
    prefix = 'abc/{user_cognito_identity_id}'
    try:
        response = s3.get_object(
            Bucket=bucket,
            Key=key
        )
        # response = s3.list_objects(
        #     Bucket=bucket,
        #     Prefix=prefix,
        #     Delimiter='/'
        # )
        print(response)
        return response
    except Exception as e:
        print(e)
        print('Error getting object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket))
        raise e
What I have verified:
authentication OK
identity with correct assumed role (printed the cognito identity ID and verified it's the correct authenticated user with the ID)
if I remove ${cognito-identity.amazonaws.com:sub} and grant general access to the authenticated role, I am able to get the object; with ${cognito-identity.amazonaws.com:sub} in place, the policy does not seem to match the user's identity ID
So it seems that there's an issue with the IAM policy
IAM policy
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:ListBucket"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::bucket-name"
            ],
            "Condition": {
                "StringLike": {
                    "s3:prefix": [
                        "*/${cognito-identity.amazonaws.com:sub}/*"
                    ]
                }
            }
        },
        {
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::bucket-name/cognito/${cognito-identity.amazonaws.com:sub}/",
                "arn:aws:s3:::bucket-name/cognito/${cognito-identity.amazonaws.com:sub}/*"
            ]
        }
    ]
}
I tried listing bucket / get object / put object, all access denied.
I did try playing around with policies, such as removing the ListBucket condition (obviously it allows access then, since I have authenticated) or changing "s3:prefix" to "${cognito-identity.amazonaws.com:sub}/" or "cognito/${cognito-identity.amazonaws.com:sub}/", but couldn't make anything work.
Same goes for put or get object.
My S3 folder is bucket-name/cognito/{cognito-user-identity-id}/key
I referred to:
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_s3_cognito-bucket.html
https://aws.amazon.com/blogs/mobile/understanding-amazon-cognito-authentication-part-3-roles-and-policies/
Any insights on where this might be going wrong?
I managed to resolve this after changing the GetObject and PutObject policy Resources from
"arn:aws:s3:::bucket-name/cognito/${cognito-identity.amazonaws.com:sub}/",
"arn:aws:s3:::bucket-name/cognito/${cognito-identity.amazonaws.com:sub}/*"
to
"arn:aws:s3:::bucket-name/*/${cognito-identity.amazonaws.com:sub}/",
"arn:aws:s3:::bucket-name/*/${cognito-identity.amazonaws.com:sub}/*"
and it works. I don't quite get why the original Resource would be denied, since my bucket does have the cognito prefix right after the bucket root, but this is resolved now.
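For reference, here is a sketch of how the GetObject/PutObject statement looks after the change (the bucket name is still a placeholder):
{
    "Action": [
        "s3:GetObject",
        "s3:PutObject"
    ],
    "Effect": "Allow",
    "Resource": [
        "arn:aws:s3:::bucket-name/*/${cognito-identity.amazonaws.com:sub}/",
        "arn:aws:s3:::bucket-name/*/${cognito-identity.amazonaws.com:sub}/*"
    ]
}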
I am trying to accomplish the following scenario:
1) Account A uploads a file to an S3 bucket owned by Account B. At upload time I grant full control to the account owner (Account B):
s3_client.upload_file(
    local_file,
    bucket,
    remote_file_name,
    ExtraArgs={'GrantFullControl': 'id=<AccountB_CanonicalID>'}
)
2) Account B defines a bucket policy that limits the access to the objects by IP (see below)
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "AllowIPs",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::bucketB/*",
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": [
                        "<CIDR1>",
                        "<CIDR2>"
                    ]
                }
            }
        }
    ]
}
I get access denied if I try to download the file as an anonymous user, even from the specified IP range. If, at upload, I add public-read permission for everyone, then I can download the file from any IP:
s3_client.upload_file(
    local_file, bucket,
    remote_file_name,
    ExtraArgs={
        'GrantFullControl': 'id=AccountB_CanonicalID',
        'GrantRead': 'uri="http://acs.amazonaws.com/groups/global/AllUsers"'
    }
)
Question: is it possible to upload the file from Account A to Account B but still restrict public access by an IP range?
This is not possible. According to the documentation:
Bucket Policy – For your bucket, you can add a bucket policy to grant
other AWS accounts or IAM users permissions for the bucket and the
objects in it. Any object permissions apply only to the objects that
the bucket owner creates. Bucket policies supplement, and in many
cases, replace ACL-based access policies.
However, there is a workaround for this scenario. The problem is that the owner of the uploaded file is Account A. We need to upload the file in such a way that the owner of the file is Account B. To accomplish this we need to:
In Account B, create a role for a trusted entity (select "Another AWS account" and specify Account A), and add upload permission for the bucket.
In Account A, create a policy that allows the sts:AssumeRole action, with the ARN of the role created in step 1 as the resource (a rough sketch of both policies follows below).
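As a rough sketch (the account ID and role ARN are placeholders), the trust policy on the role in Account B looks like:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "AWS": "arn:aws:iam::<AccountA_ID>:root" },
            "Action": "sts:AssumeRole"
        }
    ]
}
and the policy attached to the user/role in Account A looks like:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "<arn_of_role_created_in_step_1>"
        }
    ]
}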
To upload the file with boto3 you can use the following code. Note the use of cachetools to deal with the limited TTL of the temporary credentials.
import logging
import sys
import time
import boto3
from cachetools import cached, TTLCache
CREDENTIALS_TTL = 1800
credentials_cache = TTLCache(1, CREDENTIALS_TTL - 60)
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(message)s')
logger = logging.getLogger()
def main():
    local_file = sys.argv[1]
    bucket = '<bucket_from_account_B>'
    client = _get_s3_client_for_another_account()
    client.upload_file(local_file, bucket, local_file)
    logger.info('Uploaded %s to %s' % (local_file, bucket))


@cached(credentials_cache)
def _get_s3_client_for_another_account():
    sts = boto3.client('sts')
    response = sts.assume_role(
        RoleArn='<arn_of_role_created_in_step_1>',
        RoleSessionName='cross-account-upload',  # required by assume_role; any descriptive name works
        DurationSeconds=CREDENTIALS_TTL
    )
    credentials = response['Credentials']
    credentials = {
        'aws_access_key_id': credentials['AccessKeyId'],
        'aws_secret_access_key': credentials['SecretAccessKey'],
        'aws_session_token': credentials['SessionToken'],
    }
    return boto3.client('s3', 'eu-central-1', **credentials)


if __name__ == '__main__':
    main()
I'm moving all the instances under each service from an old AWS account into a new AWS account. I've found ways to move EC2 and RDS into another account.
To move EC2 instance, I have created an AMI and shared with the new AWS account. Using that image I've created an instance
To move RDS instance, I've created a snapshot and shared with the new AWS account. I've restored the shared snapshot in the new account
Now I need to move Elasticsearch from the old account to the new one. I haven't been able to figure out a way to move my Elasticsearch domain. Can anyone help me with this?
Create a role with Elasticsearch permissions. You may also use an existing role with the following trust relationship:
{
    "Effect": "Allow",
    "Principal": {
        "Service": "es.amazonaws.com"
    },
    "Action": "sts:AssumeRole"
}
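Note that this role also needs access to the S3 bucket that will hold the snapshot. A sketch of the permissions the AWS docs describe for manual snapshots (the bucket name is a placeholder):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": ["s3:ListBucket"],
            "Effect": "Allow",
            "Resource": ["arn:aws:s3:::s3-bucket-name"]
        },
        {
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Effect": "Allow",
            "Resource": ["arn:aws:s3:::s3-bucket-name/*"]
        }
    ]
}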
Provide iam:PassRole for the IAM user whose access/secret keys will be used to take the snapshot.
{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Action": "iam:PassRole",
        "Resource": "arn:aws:iam::accountID:role/TheServiceRole"
    }
}
Change the access & secret key, host, region, path, and payload in the below code and execute it.
import requests
from requests_aws4auth import AWS4Auth
AWS_ACCESS_KEY_ID=''
AWS_SECRET_ACCESS_KEY=''
region = 'us-west-1'
service = 'es'
awsauth = AWS4Auth(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, region, service)
host = 'https://elasticsearch-domain.us-west-1.es.amazonaws.com/' # include https:// and trailing /
# REGISTER REPOSITORY
path = '_snapshot/my-snapshot-repo' # the Elasticsearch API endpoint
url = host + path
payload = {
    "type": "s3",
    "settings": {
        "bucket": "s3-bucket-name",
        "region": "us-west-1",
        "role_arn": "arn:aws:iam::accountID:role/TheServiceRole"
    }
}
headers = {"Content-Type": "application/json"}
r = requests.put(url, auth=awsauth, json=payload, headers=headers) # requests.get, post, put, and delete all have similar syntax
print(r.text)
To take the snapshot and store it in S3:
path = '_snapshot/my-snapshot-repo/my-snapshot'
url = host + path
r = requests.put(url, auth=awsauth)
print(r.text)
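Before sharing, you can optionally verify that the snapshot has completed. A small sketch that reuses the same host and awsauth values as above:
path = '_snapshot/my-snapshot-repo/_all'  # lists snapshots in the repo with their state
url = host + path
r = requests.get(url, auth=awsauth)
print(r.text)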
Now the snapshot is ready. Share this snapshot with the other account and use the same code with the new account's keys and endpoint to restore it using the below code snippets.
To restore all indices from the snapshot
path = '_snapshot/my-snapshot-repo/my-snapshot/_restore'
url = host + path
r = requests.post(url, auth=awsauth)
print(r.text)
To restore a single index from the snapshot
path = '_snapshot/my-snapshot-repo/my-snapshot/_restore'
url = host + path
payload = {"indices": "my-index"}
headers = {"Content-Type": "application/json"}
r = requests.post(url, auth=awsauth, json=payload, headers=headers)
print(r.text)
Reference: AWS docs.
I am getting an access denied error from the S3 AWS service on my Lambda function.
This is the code:
// dependencies
var async = require('async');
var AWS = require('aws-sdk');
var gm = require('gm').subClass({ imageMagick: true }); // Enable ImageMagick integration.

exports.handler = function(event, context) {
    var srcBucket = event.Records[0].s3.bucket.name;
    // Object key may have spaces or unicode non-ASCII characters.
    var key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
    /*
    {
        originalFilename: <string>,
        versions: [
            {
                size: <number>,
                crop: [x,y],
                max: [x, y],
                rotate: <number>
            }
        ]
    }*/
    var fileInfo;
    var dstBucket = "xmovo.transformedimages.develop";
    try {
        //TODO: Decompress and decode the returned value
        fileInfo = JSON.parse(key);
        //download s3File

        // get reference to S3 client
        var s3 = new AWS.S3();

        // Download the image from S3 into a buffer.
        s3.getObject({
                Bucket: srcBucket,
                Key: key
            },
            function (err, response) {
                if (err) {
                    console.log("Error getting from s3: >>> " + err + "::: Bucket-Key >>>" + srcBucket + "-" + key + ":::Principal>>>" + event.Records[0].userIdentity.principalId, err.stack);
                    return;
                }

                // Infer the image type.
                var img = gm(response.Body);
                var imageType = null;
                img.identify(function (err, data) {
                    if (err) {
                        console.log("Error image type: >>> " + err);
                        deleteFromS3(srcBucket, key);
                        return;
                    }
                    imageType = data.format;

                    // foreach of the versions requested
                    async.each(fileInfo.versions, function (currentVersion, callback) {
                        // apply transform
                        async.waterfall([async.apply(transform, response, currentVersion), uploadToS3, callback]);
                    }, function (err) {
                        if (err) console.log("Error on execution of waterfall: >>> " + err);
                        else {
                            // when all done then delete the original image from srcBucket
                            deleteFromS3(srcBucket, key);
                        }
                    });
                });
            });
    }
    catch (ex) {
        context.fail("exception through: " + ex);
        deleteFromS3(srcBucket, key);
        return;
    }

    function transform(response, version, callback) {
        var imageProcess = gm(response.Body);
        if (version.rotate != 0) imageProcess = imageProcess.rotate("black", version.rotate);
        if (version.size != null) {
            if (version.crop != null) {
                // crop the image from the coordinates
                imageProcess = imageProcess.crop(version.size[0], version.size[1], version.crop[0], version.crop[1]);
            }
            else {
                // find the bigger side and resize the other dimension proportionally
                var widthIsMax = version.size[0] > version.size[1];
                var maxValue = Math.max(version.size[0], version.size[1]);
                imageProcess = (widthIsMax) ? imageProcess.resize(maxValue) : imageProcess.resize(null, maxValue);
            }
        }
        // finally convert the image to jpg 90%
        imageProcess.toBuffer("jpg", {quality: 90}, function(err, buffer) {
            if (err) callback(err);
            callback(null, version, "image/jpeg", buffer);
        });
    }

    function deleteFromS3(bucket, filename) {
        s3.deleteObject({
            Bucket: bucket,
            Key: filename
        });
    }

    function uploadToS3(version, contentType, data, callback) {
        // Stream the transformed image to a different S3 bucket.
        var dstKey = fileInfo.originalFilename + "_" + version.size + ".jpg";
        s3.putObject({
            Bucket: dstBucket,
            Key: dstKey,
            Body: data,
            ContentType: contentType
        }, callback);
    }
};
This is the error on Cloudwatch:
AccessDenied: Access Denied
This is the stack error:
at Request.extractError (/var/runtime/node_modules/aws-sdk/lib/services/s3.js:329:35)
at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:105:20)
at Request.emit (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:77:10)
at Request.emit (/var/runtime/node_modules/aws-sdk/lib/request.js:596:14)
at Request.transition (/var/runtime/node_modules/aws-sdk/lib/request.js:21:10)
at AcceptorStateMachine.runTo (/var/runtime/node_modules/aws-sdk/lib/state_machine.js:14:12)
at /var/runtime/node_modules/aws-sdk/lib/state_machine.js:26:10
at Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:37:9)
at Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:598:12)
at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:115:18)
There is no other description or info.
The S3 bucket permissions allow everyone to put, list, and delete.
What can I do to access the S3 bucket?
PS: on Lambda event properties the principal is correct and has administrative privileges.
Interestingly enough, AWS returns 403 (access denied) when the file does not exist. Be sure the target file is in the S3 bucket.
If you are specifying the Resource, don't forget to add the subfolder specification as well, like this:
"Resource": [
"arn:aws:s3:::BUCKET-NAME",
"arn:aws:s3:::BUCKET-NAME/*"
]
Your Lambda does not have privileges (s3:GetObject).
Go to the IAM dashboard and check the role associated with your Lambda execution. If you used the AWS wizard, it automatically creates a role called oneClick_lambda_s3_exec_role. Click on Show Policy. It should show something similar to the attached image. Make sure s3:GetObject is listed.
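If it is missing, a minimal statement to add to that execution role policy could look like this (the bucket name is a placeholder):
{
    "Effect": "Allow",
    "Action": ["s3:GetObject"],
    "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*"
}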
I ran into this issue and after hours of IAM policy madness, the solution was to:
Go to S3 console
Click bucket you are interested in.
Click 'Properties'
Unfold 'Permissions'
Click 'Add more permissions'
Choose 'Any Authenticated AWS User' from dropdown. Select 'Upload/Delete' and 'List' (or whatever you need for your lambda).
Click 'Save'
Done.
Your carefully written IAM role policies don't matter, neither do specific bucket policies (I've written those too to make it work). Or they just don't work on my account, who knows.
[EDIT]
After a lot of tinkering, I found the above approach is not the best. Try this:
Keep your role policy as in the helloV post.
Go to S3. Select your bucket. Click Permissions. Click Bucket Policy.
Try something like this:
{
    "Version": "2012-10-17",
    "Id": "Lambda access bucket policy",
    "Statement": [
        {
            "Sid": "All on objects in bucket lambda",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::AWSACCOUNTID:root"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::BUCKET-NAME/*"
        },
        {
            "Sid": "All on bucket by lambda",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::AWSACCOUNTID:root"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::BUCKET-NAME"
        }
    ]
}
Worked for me and does not require for you to share with all authenticated AWS users (which most of the time is not ideal).
If you have encryption set on your S3 bucket (such as AWS KMS), you may need to make sure the IAM role applied to your Lambda function is added to the list of IAM > Encryption keys > region > key > Key Users for the corresponding key that you used to encrypt your S3 bucket at rest.
In my screenshot, for example, I added the CyclopsApplicationLambdaRole role that I have applied to my Lambda function as a Key User in IAM for the same AWS KMS key that I used to encrypt my S3 bucket. Don't forget to select the correct region for your key when you open up the Encryption keys UI.
Find the execution role you've applied to your Lambda function.
Find the key you used to add encryption to your S3 bucket.
In IAM > Encryption keys, choose your region and click on the key name.
Add the role as a Key User in IAM Encryption keys for the key specified in S3.
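If you prefer editing the key policy directly instead of using the console, adding a key user roughly corresponds to a statement like this (the account ID and role name are placeholders from my setup):
{
    "Sid": "Allow use of the key",
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::ACCOUNT-ID:role/CyclopsApplicationLambdaRole" },
    "Action": [
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:ReEncrypt*",
        "kms:GenerateDataKey*",
        "kms:DescribeKey"
    ],
    "Resource": "*"
}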
If all the other policy ducks are in a row, S3 will still return an Access Denied message if the object doesn't exist AND the requester doesn't have ListBucket permission on the bucket.
From https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGET.html:
...If the object you request does not exist, the error Amazon S3
returns depends on whether you also have the s3:ListBucket permission.
If you have the s3:ListBucket permission on the bucket, Amazon S3 will
return an HTTP status code 404 ("no such key") error. if you don’t
have the s3:ListBucket permission, Amazon S3 will return an HTTP
status code 403 ("access denied") error.
I too ran into this issue. I fixed it by providing s3:GetObject* in the ACL, as it is attempting to obtain a version of that object.
I tried to execute a basic blueprint Python Lambda function [example code] and I had the same issue. My execution role was lambda_basic_execution.
I went to S3 > (my bucket name here) > Permissions.
Because I'm a beginner, I used the Policy Generator provided by Amazon rather than writing JSON myself: http://awspolicygen.s3.amazonaws.com/policygen.html
my JSON looks like this:
{
    "Id": "Policy153536723xxxx",
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt153536722xxxx",
            "Action": [
                "s3:GetObject"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::tokabucket/*",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::82557712xxxx:role/lambda_basic_execution"
                ]
            }
        }
    ]
}
And then the code executed nicely.
I solved my problem by following all the instructions from AWS - How do I allow my Lambda execution role to access my Amazon S3 bucket?:
Create an AWS Identity and Access Management (IAM) role for the Lambda function that grants access to the S3 bucket.
Modify the IAM role's trust policy (see the sketch after this list).
Set the IAM role as the Lambda function's execution role.
Verify that the bucket policy grants access to the Lambda function's execution role.
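For step 2, a sketch of what the trust policy typically looks like for a Lambda execution role:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "Service": "lambda.amazonaws.com" },
            "Action": "sts:AssumeRole"
        }
    ]
}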
I was trying to read a file from S3 and create a new file by changing the content of the file read (Lambda + Node). Reading the file from S3 did not have any problem. As soon as I tried writing to the S3 bucket, I got an 'Access Denied' error.
I tried everything listed above but couldn't get rid of 'Access Denied'. Finally, I was able to get it working by giving 'List Object' permission to everyone on my bucket.
Obviously this is not the best approach, but nothing else worked.
After searching for a long time, I saw that my bucket policy only allowed read access and not put access:
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicListGet",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:List*",
"s3:Get*",
"s3:Put*"
],
"Resource": [
"arn:aws:s3:::bucketName",
"arn:aws:s3:::bucketName/*"
]
}
]
}
Another issue might be that, in order to fetch objects cross-region, you need to initialize a new S3 client with the other region name, like:
const getS3Client = (region) => new S3({ region })
I used this function to get an S3 client based on the region.
I was struggling with this issue for hours. I was using AmazonS3EncryptionClient and nothing I did helped. Then I noticed that the client is actually deprecated, so I thought I'd try switching to the builder model they have:
var builder = AmazonS3EncryptionClientBuilder.standard()
.withEncryptionMaterials(new StaticEncryptionMaterialsProvider(encryptionMaterials))
if (accessKey.nonEmpty && secretKey.nonEmpty) builder = builder.withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(accessKey.get, secretKey.get)))
builder.build()
And... that solved it. Looks like Lambda has trouble injecting the credentials in the old model, but works well in the new one.
I was getting the same error "AccessDenied: Access Denied" while cropping S3 images using a Lambda function. I updated the S3 bucket policy and the IAM role inline policy as per the document link given below.
But still, I was getting the same error. Then I realised I was trying to give "public-read" access on a private bucket. After removing ACL: 'public-read' from S3.putObject, the problem got resolved.
https://aws.amazon.com/premiumsupport/knowledge-center/access-denied-lambda-s3-bucket/
I had this error message in the AWS Lambda environment when using boto3 with Python:
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the GetObject operation: Access Denied
It turns out I needed an extra permission because I was using object tags. If your objects have tags, you will need both s3:GetObject and s3:GetObjectTagging to get the object.
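As a sketch, the relevant statement in the role policy would then include both actions (the bucket name is a placeholder):
{
    "Effect": "Allow",
    "Action": [
        "s3:GetObject",
        "s3:GetObjectTagging"
    ],
    "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*"
}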
I faced the same problem when creating a Lambda function that needed to read S3 bucket content. I created the Lambda function and the S3 bucket using AWS CDK. To solve this within AWS CDK, I used the magic from the docs.
Resources that use execution roles, such as lambda.Function, also
implement IGrantable, so you can grant them access directly instead of
granting access to their role. For example, if bucket is an Amazon S3
bucket, and function is a Lambda function, the code below grants the
function read access to the bucket.
bucket.grantRead(function);
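For CDK in Python the equivalent method is grant_read. A minimal sketch of the idea (the stack name, runtime, and asset path are assumptions for illustration):
from aws_cdk import Stack, aws_lambda as _lambda, aws_s3 as s3
from constructs import Construct

class MyStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        bucket = s3.Bucket(self, "DataBucket")
        fn = _lambda.Function(
            self, "ReaderFunction",
            runtime=_lambda.Runtime.PYTHON_3_9,
            handler="index.handler",
            code=_lambda.Code.from_asset("lambda"),  # directory containing index.py
        )
        # Adds read permissions on the bucket to the function's execution role
        bucket.grant_read(fn)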
We're using the AWS SDK (.NET) and have successfully uploaded files through our program using PutObjectRequest. I know how to set the ACL permissions on the file once it's created, but when trying to get the file using GetObjectRequest, our application is getting "Access Denied". I realize that I don't know what the user ID is for the application that's running. How can I make sure my application has the permissions needed to read the file, without using "public" rights? (Setting the ACL on the file to public works for the application.)
Is there a way to make the application retrieve a file AS a certain user or group?
Any AWS API request needs an access key and secret key pair, which can be set from:
Hard coded in the app,
AWS configuration file (by default it is in ~/.aws/credentials or C:\Users\*USERNAME*\.aws\credentials), or
Instance role for EC2 instances.
For the #1 and #2, you can check your IAM user permission in https://console.aws.amazon.com/iam/home?region=us-east-1#users.
For #3, you can check your IAM role in https://console.aws.amazon.com/iam/home?region=us-east-1#roles.
Make sure the user/role has enough permission to read your S3 object. You can attach this policy to the user/role:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1455021573875",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectAcl"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::your-bucket-name/*"
        }
    ]
}
Adjust the above policy. Your app should have read object access now.
Btw, you can set the ACL while creating a resource. You can find the documentation at http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-using-dot-net-sdk.html.
static string bucketName = "*** Provide existing bucket name ***";
static string newBucketName = "*** Provide a name for a new bucket ***";
static string newKeyName = "*** Provide a name for a new key ***";

IAmazonS3 client;
client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1);

// Retrieve ACL from one of the owner's buckets
S3AccessControlList acl = client.GetACL(new GetACLRequest
{
    BucketName = bucketName,
}).AccessControlList;

// Describe grant for full control for owner.
S3Grant grant1 = new S3Grant
{
    Grantee = new S3Grantee { CanonicalUser = acl.Owner.Id },
    Permission = S3Permission.FULL_CONTROL
};

// Describe grant for write permission for the LogDelivery group.
S3Grant grant2 = new S3Grant
{
    Grantee = new S3Grantee { URI = "http://acs.amazonaws.com/groups/s3/LogDelivery" },
    Permission = S3Permission.WRITE
};

PutBucketRequest request = new PutBucketRequest()
{
    BucketName = newBucketName,
    BucketRegion = S3Region.US,
    Grants = new List<S3Grant> { grant1, grant2 }
};
PutBucketResponse response = client.PutBucket(request);

PutObjectRequest objectRequest = new PutObjectRequest()
{
    ContentBody = "Object data for simple put.",
    BucketName = newBucketName,
    Key = newKeyName,
    Grants = new List<S3Grant> { grant1 }
};
PutObjectResponse objectResponse = client.PutObject(objectRequest);