S3 Bucket copy object

I'm currently working with an AWS S3 bucket and its services. I copy an object from one bucket folder to another and, in the response, compare the ETags from the metadata. If the tags are equal, I return the destination bucket's image path. But when my ReactJS front end renders the response, it shows a broken image, and only after a refresh does the proper image appear. I don't understand why this is happening.
ObjectMetadata metadata = s3client.getObjectMetadata(bucketName, sourceKey);
CopyObjectResult copyObjectResult = s3client.copyObject(bucketName, sourceKey, bucketName, destinationKey);

if (metadata.getETag().equals(copyObjectResult.getETag())) {
    // Copy verified: delete the source object and return the destination URL.
    s3client.deleteObject(bucketName, sourceKey);
    LOG.info("profile successfully uploaded to bucket");
    return s3BucketConfiguration.getS3URL() + "/" + Constants.REVIEWER_DIR + "/" + FilenameUtils.getName(url.getPath());
} else {
    // ETags differ: fall back to the default image path.
    LOG.error("error in upload profile to bucket");
    return String.format("%s/%s/%s", s3BucketConfiguration.getS3URL(), Constants.REVIEWER_DIR, Constants.DEFAULT_IMAGE);
}
Each time, I get the log message "profile successfully uploaded to bucket", and yet the page still renders a broken image. I'm confused about what the problem could be.
Please help me out with this.
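For comparison, here is the same copy/verify/delete flow written with boto3, with an explicit wait for the destination object before its URL is returned. This is only a sketch: the bucket, key, and URL names are placeholders, and the wait is an assumption aimed at the timing issue, not part of the original code.

import boto3

s3 = boto3.client("s3")

def copy_profile_image(bucket, source_key, destination_key, base_url):
    # Compare the source ETag with the ETag reported by the copy.
    source_etag = s3.head_object(Bucket=bucket, Key=source_key)["ETag"]
    copy_result = s3.copy_object(
        Bucket=bucket,
        Key=destination_key,
        CopySource={"Bucket": bucket, "Key": source_key},
    )
    if copy_result["CopyObjectResult"]["ETag"] == source_etag:
        # Wait until the destination object is readable before handing out its URL.
        s3.get_waiter("object_exists").wait(Bucket=bucket, Key=destination_key)
        s3.delete_object(Bucket=bucket, Key=source_key)
        return f"{base_url}/{destination_key}"
    # Fall back to a placeholder default image path.
    return f"{base_url}/default-image.png"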

Related

Unable to get object metadata from S3. Check object key, region and/or access permissions

I have a Lambda function that scans images for text and is triggered by an S3 bucket. I get this error when I upload a photo directly into the S3 bucket using the browser:
Unable to get object metadata from S3. Check object key, region, and/or access permissions
However, if I hardcode the key (e.g., image01.jpg) which is in my bucket, there are no errors.
import json
import boto3

def lambda_handler(event, context):
    # Get bucket and file name
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']
    location = key[:17]

    s3Client = boto3.client('s3')
    client = boto3.client('rekognition', region_name='us-east-1')
    response = client.detect_text(
        Image={'S3Object': {'Bucket': 'myarrowbucket', 'Name': key}}
    )
    detectedText = response['TextDetections']
I am confused, as it was working a few weeks ago, but now I am getting that error.
ANSWER
I have seen this question answered many times and I tried every solution; the one that worked for me was the 'key' name. I was getting the metadata error when the filename contained special characters (e.g. - or _), but when I changed the names of the uploaded files it worked. Hope this answer helps someone.
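One detail worth checking alongside that: object keys arrive URL-encoded in S3 event notifications (spaces become +, and much punctuation is percent-encoded), so decoding the key before passing it to other services is a common fix for this error. A sketch of the handler with that change; the bucket name is taken from the event rather than hardcoded:

import urllib.parse
import boto3

rekognition = boto3.client('rekognition', region_name='us-east-1')

def lambda_handler(event, context):
    bucket = event['Records'][0]['s3']['bucket']['name']
    # Keys in S3 event notifications are URL-encoded; decode before using them.
    key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'])
    response = rekognition.detect_text(
        Image={'S3Object': {'Bucket': bucket, 'Name': key}}
    )
    return response['TextDetections']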

AWS S3 files uploaded partially

I am using AWS JavaScript SDK v2 to upload files from my web application. When uploading a large number of files (200 or more), the upload reports success, but many files are missing and do not appear in the AWS console.
I am also making a headObject call to verify whether each file was uploaded successfully; it returns success, but files are still missing. Below is my code:
// Upload file
const params = {
  Bucket: bucket,
  Key: directory + fileName,
  Body: file,
};
await s3Client.upload(params).promise();

// Check if uploaded successfully
const headParams = {
  Bucket: bucket,
  Key: directory + fileName,
};
const fileDetails = await s3Client.headObject(headParams).promise();
if (fileSize === fileDetails.ContentLength) {
  // Uploaded successfully
}
Is there anything I am missing?
Thanks!
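Independent of the upload SDK, one way to find out which files actually landed is to list the prefix afterwards and compare it against the expected names. A boto3 sketch of that reconciliation; the bucket, directory, and expected_names values are placeholders:

import boto3

s3 = boto3.client("s3")

def find_missing(bucket, directory, expected_names):
    # Collect every key currently stored under the prefix.
    uploaded = set()
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=directory):
        for obj in page.get("Contents", []):
            uploaded.add(obj["Key"])
    # Return the expected files that are not present in the bucket.
    return [name for name in expected_names if directory + name not in uploaded]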

Generate presigned s3 URL of latest object in the bucket using boto3

I have an S3 bucket with multiple folders. How can I generate an S3 presigned URL for the latest object in whichever folder a user asks for, using Python boto3?
You can do something like
import boto3
from botocore.client import Config
import requests

bucket = 'bucket-name'
folder = ''  # add a folder prefix here, e.g. 'images/'; leave empty for the whole bucket

s3 = boto3.client('s3', config=Config(signature_version='s3v4'))

objs = s3.list_objects(Bucket=bucket, Prefix=folder)['Contents']
latest = max(objs, key=lambda x: x['LastModified'])
print(latest)

print("Generating pre-signed url...")
url = s3.generate_presigned_url(
    ClientMethod='get_object',
    Params={
        'Bucket': bucket,
        'Key': latest['Key']
    }
)
print(url)

response = requests.get(url)
print(response.url)
Here it will give the most recently modified file from the whole bucket; you can update the logic and the prefix value as needed.
If you are running in a Kubernetes pod, a VM, or anywhere else, you can pass these values as environment variables, or store the latest key in a Python dict if required.
If it's a small bucket then recursively list the bucket, with prefix as needed. Sort the results by timestamp, and create the pre-signed URL for the latest.
If it's a very large bucket, this will be very inefficient and you should consider other ways to store the key of the latest file. For example: trigger a Lambda function whenever an object is uploaded and write that object's key into a LATEST item in DynamoDB (or other persistent store).
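A sketch of that Lambda-plus-DynamoDB idea, assuming a table named latest-objects with a string partition key id; the table and attribute names are illustrative, not from the original answer:

import urllib.parse
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("latest-objects")  # hypothetical table name

def lambda_handler(event, context):
    # Triggered by s3:ObjectCreated:*; record the newest key per bucket.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        table.put_item(Item={"id": bucket, "latest_key": key})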

Image upload permission for amazon s3 bucket

I am facing a permission issue while uploading an image to an S3 bucket from my iOS code. I am able to upload it to the bucket, but how can I upload the image with public permission so that I can read/view it? I tried the code below, but it shows as deprecated. I attached a screenshot as well.
if let _ = task.result {
    DispatchQueue.main.async {
        print("Thumb Image Upload Starting!")
        let request = AWSS3PutObjectAclRequest()
        request?.bucket = self.bucketName
        request?.key = keyName
        request?.acl = AWSS3ObjectCannedACL.publicReadWrite
        let s3Service = AWSS3.default()
        s3Service.putObjectAcl(request!)
    }
}
To view images from S3, the best approach is to use a presigned URL instead of making it public.
To do so, you can follow the instructions on this link: https://github.com/awsdocs/aws-mobile-developer-guide/blob/master/doc_source/how-to-ios-s3-presigned-urls.rst

Image file cut off when uploading to AWS S3 bucket via Django and Boto3

When I upload a larger image (3+ MB) to an AWS S3 bucket, only part of the image is saved to the bucket (roughly the top 10% of the image, with the rest displaying as grey space). These images consistently show a 256 KB size. There isn't any issue with smaller files.
Here's my code:
s3 = boto3.resource('s3')
s3.Bucket(settings.AWS_MEDIA_BUCKET_NAME).put_object(Key=fname, Body=data)
...where data is the binary data of the image file.
There are no issues when the files are smaller; in the S3 bucket, the larger files all show as 256 KB.
I haven't been able to find any documentation about why this might be happening. Can someone please point out what I'm missing?
Thanks!
I had the same issue and it took me hours to figure it out. I finally fixed it by creating a stream. This is my code:
const fs = require('fs');
const AWS = require('aws-sdk');

const s3 = new AWS.S3();

const uploadFile = (filePath) => {
  let fileName = filePath;
  fs.readFile(fileName, (err, data) => {
    if (err) throw err;
    let body = fs.createReadStream(filePath);
    const params = {
      Bucket: 'bucketname', // pass your bucket name
      Key: fileName,
      Body: body,
      ContentType: 'image/jpeg',
      ContentEncoding: 'base64',
    };
    s3.upload(params, function(s3Err, data) {
      if (s3Err) throw s3Err;
      console.log(`File uploaded successfully at ${data.Location}`);
    });
  });
};
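Back in the boto3 setting of the question, the usual suspects for a truncated upload are a file object that has already been partially read or a body that is cut short before put_object is called. Handing boto3 the file object and letting it manage the transfer avoids both; a sketch, with the bucket, key, and content type as placeholders:

import boto3

s3 = boto3.client("s3")

def upload_image(bucket, key, fileobj):
    # Rewind in case the file was already read (e.g. for validation or resizing).
    fileobj.seek(0)
    # upload_fileobj streams the whole file and switches to multipart for large files.
    s3.upload_fileobj(fileobj, bucket, key, ExtraArgs={"ContentType": "image/jpeg"})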