I am facing a permissions issue while uploading an image to an S3 bucket from my iOS code. The upload itself succeeds, but how can I upload the image with public permissions so that I can read/view it? I tried the code below, but it shows as deprecated. I attached a screenshot as well.
if let _ = task.result {
    DispatchQueue.main.async {
        print("Thumb Image Upload Starting!")
        // Build an ACL request to make the uploaded object publicly readable
        let request = AWSS3PutObjectAclRequest()
        request?.bucket = self.bucketName
        request?.key = keyName
        request?.acl = AWSS3ObjectCannedACL.publicReadWrite
        let s3Service = AWSS3.default()
        s3Service.putObjectAcl(request!)
    }
}
To view images from S3, the best approach is to use a presigned URL instead of making it public.
To do so, you can follow the instructions on this link: https://github.com/awsdocs/aws-mobile-developer-guide/blob/master/doc_source/how-to-ios-s3-presigned-urls.rst
I am creating a bucket programmatically as follows:
String bucketName = UUID.randomUUID().toString();
List<Acl> aclList = new ArrayList<>();
if (gcsBucketEntity.isPublic()) {
    Acl publicAccessAcl = Acl.newBuilder(Acl.User.ofAllUsers(), Acl.Role.READER).build();
    aclList.add(publicAccessAcl);
}
BucketInfo bucketInfo = BucketInfo
        .newBuilder(bucketName)
        .setLocation(gcsBucketEntity.getLocation()) // Multi-regions
        .setStorageClass(valueOfStrict(gcsBucketEntity.getStorageType().toString()))
        .setAcl(aclList)
        .build();
Bucket bucket = this.storage.create(bucketInfo);
I have also tried to set a BucketTargetOption instead:
Storage.BucketTargetOption bucketTargetOption = Storage.BucketTargetOption
        .predefinedAcl(Storage.PredefinedAcl.PUBLIC_READ);
Bucket bucket = this.storage.create(bucketInfo, bucketTargetOption);
with the exact same result.
The bucket is created and in the GCP console I can see that the access is public.
However, I am not able to access any files and I get an AccessDenied error instead:
<Error>
  <Code>AccessDenied</Code>
  <Message>Access denied.</Message>
  <Details>Anonymous caller does not have storage.objects.get access to the Google Cloud Storage object.</Details>
</Error>
If I create the bucket manually, I think I have to add a Storage Object Viewer role to the user allUsers.
This is the only difference I can see between the manually and automatically created buckets, so my question is:
How do I add this permission programmatically?
There is actually an example in the docs.
Apparently, we have to create the bucket first and set the IAM policy afterwards.
BucketInfo bucketInfo = BucketInfo
        .newBuilder(bucketName)
        .setLocation(gcsBucketEntity.getLocation()) // Multi-regions
        .setStorageClass(valueOfStrict(gcsBucketEntity.getStorageType().toString()))
        .build();
Bucket bucket = this.storage.create(bucketInfo);
if (gcsBucketEntity.isPublic()) {
    Policy policy = this.storage.getIamPolicy(bucketName);
    this.storage.setIamPolicy(
            bucket.getName(),
            policy.toBuilder()
                    .addIdentity(StorageRoles.objectViewer(), Identity.allUsers())
                    .build()
    );
}
This is a bit odd, in my opinion, because if something goes wrong between the two calls I might end up with a "broken" bucket.
Anyway, the code above works for me.
I have an S3 bucket with multiple folders. How can I generate an S3 presigned URL for the latest object in each folder a user asks for, using Python boto3?
You can do something like:

import boto3
from botocore.client import Config
import requests

bucket = 'bucket-name'
# Folder prefix, e.g. 'images/' -- keep the trailing '/'.
# Use '' to search the whole bucket; a bare '/' matches nothing,
# since object keys normally do not start with a slash.
folder = ''
s3 = boto3.client('s3', config=Config(signature_version='s3v4'))

# NOTE: list_objects returns at most 1000 keys; use a paginator for larger buckets.
objs = s3.list_objects(Bucket=bucket, Prefix=folder)['Contents']
latest = max(objs, key=lambda x: x['LastModified'])
print(latest)

print("Generating pre-signed url...")
url = s3.generate_presigned_url(
    ClientMethod='get_object',
    Params={
        'Bucket': bucket,
        'Key': latest['Key']
    }
)
print(url)
response = requests.get(url)
print(response.url)
As written, this gives the most recently modified file in the whole bucket; you can adapt the logic and the Prefix value per folder as needed.
If you are running in a Kubernetes pod, a VM, or anywhere else, you can pass the values as environment variables, or keep the latest key in a Python dict if required.
If it's a small bucket then recursively list the bucket, with prefix as needed. Sort the results by timestamp, and create the pre-signed URL for the latest.
If it's a very large bucket, this will be very inefficient and you should consider other ways to store the key of the latest file. For example: trigger a Lambda function whenever an object is uploaded and write that object's key into a LATEST item in DynamoDB (or other persistent store).
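That Lambda-plus-DynamoDB idea could be sketched roughly like this (the table name, its "prefix" key schema, and the per-folder convention are assumptions; the DynamoDB table is injectable so the logic can be exercised without AWS):

```python
TABLE_NAME = "latest-objects"  # assumed DynamoDB table with partition key "prefix"

def handler(event, context, table=None):
    """S3 ObjectCreated trigger: remember the newest key per top-level folder."""
    if table is None:
        import boto3  # only needed when actually running inside Lambda
        table = boto3.resource("dynamodb").Table(TABLE_NAME)
    updated = []
    for record in event["Records"]:
        key = record["s3"]["object"]["key"]
        # Top-level "folder" of the key; '' for keys at the bucket root.
        prefix = key.split("/", 1)[0] + "/" if "/" in key else ""
        # Overwrite the LATEST pointer for this folder.
        table.put_item(Item={"prefix": prefix, "latest_key": key})
        updated.append(prefix)
    return updated
```

A reader endpoint then fetches the LATEST item for a folder and presigns its latest_key directly, with no listing at all.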
I'm currently working with AWS S3 and its services. I'm copying an object from one bucket to another within a folder, and in the response I compare the ETags from the metadata. If the tags are equal, I return the destination bucket's image path. But when I render the response in ReactJS, it shows a broken image, and on refresh it shows the proper result. I don't understand why this is happening.
ObjectMetadata metadata = s3client.getObjectMetadata(bucketName, sourceKey);
CopyObjectResult copyObjectResult = s3client.copyObject(bucketName, sourceKey, bucketName, destinationKey);
if (metadata.getETag().equals(copyObjectResult.getETag())) {
    s3client.deleteObject(bucketName, sourceKey);
    LOG.info("profile successfully uploaded to bucket");
    return s3BucketConfiguration.getS3URL() + "/" + Constants.REVIEWER_DIR + "/" + FilenameUtils.getName(url.getPath());
} else {
    LOG.error("error in upload profile to bucket");
    return String.format("%s/%s/%s", s3BucketConfiguration.getS3URL(), Constants.REVIEWER_DIR, Constants.DEFAULT_IMAGE);
}
Each time I get the log message "profile successfully uploaded to bucket", and still it renders a broken image. I'm confused about what the problem could be.
Please help me out with this.
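One possible cause of "broken on first render, fine after refresh" is a read-after-copy race: the browser requests the destination object before it is readable. A language-agnostic way to rule that out is to poll a HEAD request on the destination key before returning its URL to the client. A sketch in Python with an injectable boto3-style client (the names and retry counts are assumptions, not part of the original code):

```python
import time

def wait_until_readable(s3_client, bucket, key, attempts=5, delay=0.5):
    """Poll HEAD on the destination object before handing its URL to the UI.

    Returns True once the object answers a HEAD request, False after
    `attempts` failed tries. Sketch only: names and timings are assumptions.
    """
    for _ in range(attempts):
        try:
            s3_client.head_object(Bucket=bucket, Key=key)
            return True
        except Exception:
            time.sleep(delay)
    return False
```

If the helper returns False even after several attempts, the problem is not timing, and the returned URL itself should be compared against the key actually written.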
I have created an application in Yii2. I want to give the admin user the ability to upload files from the admin panel, and those files should be uploaded to an S3 bucket. My bucket is private, not public. Admin users can upload image and video files into the bucket, and end users can fetch those files from the Android application. I am not able to upload files into the bucket; I get an error like:
S3::putObject(): [InvalidRequest] The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.
My code is as below:
public static function uploadToS3($fileTempName, $newfilename, $metaHeaders = array(), $contentType = null) {
    // create AWS object for file upload
    $s3 = new S3(Helpers::yiiparam('s3_access_key'), Helpers::yiiparam('s3_secret_access_key'));
    // connect AWS bucket
    if ($s3->putObjectFile($fileTempName, Helpers::yiiparam('s3_bucket_name'), $newfilename, S3::ACL_PUBLIC_READ)) {
        $msg = "S3 Upload Successful.";
        $s3file = 'http://'.Helpers::yiiparam('s3_bucket_name').'.s3.amazonaws.com/'.$newfilename;
        echo "<img src='$s3file'/>";
        echo 'S3 File URL: '.$s3file;
    } else {
        echo 'msg='.$msg = "S3 Upload Fail.";
        exit;
    }
    exit;
}
The S3 class PHP file code:
public static function putObjectFile($file, $bucket, $uri, $acl = self::ACL_PRIVATE, $metaHeaders = array(), $contentType = null) {
    return self::putObject(S3::inputFile($file), $bucket, $uri, $acl, $metaHeaders, $contentType);
}
Can you please provide a solution to resolve this issue?
I've created a user in IAM, and attached 2 managed policies: AmazonS3FullAccess and AdministratorAccess. I would like to upload files to an S3 bucket called "pscfront".
I am using the following code to do the upload:
AWSCredentials credentials = new BasicAWSCredentials(Constants.AmazonWebServices.AccessKey, Constants.AmazonWebServices.SecretKey);
using (var client = new AmazonS3Client(credentials, RegionEndpoint.USEast1))
{
    var loc = client.GetBucketLocation("s3.amazonaws.com/pscfront");
    var tu = new TransferUtility(client);
    tu.Upload(filename, Constants.AmazonWebServices.BucketName, keyName);
}
This fails with the exception "AccessDenied" (inner exception "The remote server returned an error: (403) Forbidden.") at the call to GetBucketLocation, or at the tu.Upload call if I comment that line out.
Any idea what gives?
smdh
Nothing wrong with the permissions -- I was setting the bucket name incorrectly. You just pass the plain bucket name -- "pscfront" in this case.
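In hindsight, this class of mistake is easy to guard against: the SDK expects the bare bucket name, never an endpoint or a path. A rough sanity check (the real S3 naming rules are longer than this; the helper is only a sketch):

```python
import re

# Rough subset of the S3 bucket naming rules: 3-63 characters,
# lowercase letters, digits, dots, and hyphens -- and never a slash.
BUCKET_NAME_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def looks_like_bucket_name(name: str) -> bool:
    """Reject endpoint-style strings like 's3.amazonaws.com/pscfront'."""
    return "/" not in name and bool(BUCKET_NAME_RE.match(name))
```

Failing fast on a malformed name gives a clearer error than the generic 403 the service returns.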