AWS S3 files uploaded partially

I am using the AWS JavaScript SDK v2 to upload files from my web application. When uploading a large number of files (200 or more), the uploads report success, but many of the files are missing and do not show up in the AWS console.
I am also making a headObject call to verify whether each file was uploaded successfully; it returns success, but the files are still missing. Below is my code:
// Upload file
const params = {
  Bucket: bucket,
  Key: directory + fileName,
  Body: file,
};
await s3Client.upload(params).promise();

// Check if uploaded successfully
const headParams = {
  Bucket: bucket,
  Key: directory + fileName,
};
const fileDetails = await s3Client.headObject(headParams).promise();
if (fileSize === fileDetails.ContentLength) {
  // Uploaded successfully
}
Is there anything I am missing?
Thanks!

Related

Upload image to Amazon S3 using @aws-sdk/client-s3 and get its location

I am trying to upload an image file to S3 but I get this error:
ERROR: MethodNotAllowed: The specified method is not allowed against this resource.
My code uses the @aws-sdk/client-s3 package to upload, with this code:
const s3 = new S3({
  region: 'us-east-1',
  credentials: {
    accessKeyId: config.accessKeyId,
    secretAccessKey: config.secretAccessKey,
  }
});

exports.uploadFile = async options => {
  options.internalPath = options.internalPath || (`${config.s3.internalPath + options.moduleName}/`);
  options.ACL = options.ACL || 'public-read';
  logger.info(`Uploading [${options.path}]`);
  const params = {
    Bucket: config.s3.bucket,
    Body: fs.createReadStream(options.path),
    Key: options.internalPath + options.fileName,
    ACL: options.ACL
  };
  try {
    const s3Response = await s3.completeMultipartUpload(params);
    if (s3Response) {
      logger.info(`Done uploading, uploaded to: ${s3Response.Location}`);
      return { url: s3Response.Location };
    }
  } catch (err) {
    logger.error(err, 'unable to upload:');
    throw err;
  }
};
I am not sure what this error means, and once the file is uploaded I need to get its location in S3.
Thanks for any help.
For uploading a single image file you need to be calling s3.upload() not s3.completeMultipartUpload().
If you had very large files and wanted to upload them in multiple parts, the workflow would look like:
s3.createMultipartUpload()
s3.uploadPart()
s3.uploadPart()
...
s3.completeMultipartUpload()
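For illustration, a minimal sketch of that workflow using the same aggregated S3 client from @aws-sdk/client-s3 could look like the following (the bucket, key, and the way the parts are produced are placeholders, not part of the original question):
async function multipartUpload(bucket, key, partBuffers) {
  // Start the multipart upload and remember its UploadId
  const { UploadId } = await s3.createMultipartUpload({ Bucket: bucket, Key: key });

  // Upload each part; every part except the last must be at least 5 MB
  const parts = [];
  for (let i = 0; i < partBuffers.length; i++) {
    const { ETag } = await s3.uploadPart({
      Bucket: bucket,
      Key: key,
      UploadId,
      PartNumber: i + 1, // part numbers start at 1
      Body: partBuffers[i],
    });
    parts.push({ ETag, PartNumber: i + 1 });
  }

  // Tell S3 which parts make up the final object
  return s3.completeMultipartUpload({
    Bucket: bucket,
    Key: key,
    UploadId,
    MultipartUpload: { Parts: parts },
  });
}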
Looking at the official documentation, it looks like the new way to do a simple S3 upload in the JavaScript SDK is this:
s3.send(new PutObjectCommand(uploadParams));
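As a rough sketch of that SDK v3 flow (the bucket, key, file path, and region below are placeholders; note that PutObject does not return a Location, so the object URL has to be built from the region, bucket, and key):
const fs = require('fs');
const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');

const region = 'us-east-1';
const client = new S3Client({ region });

async function putFile(bucket, key, path) {
  await client.send(new PutObjectCommand({
    Bucket: bucket,
    Key: key,
    Body: fs.readFileSync(path), // buffers the whole file; for large files prefer Upload from @aws-sdk/lib-storage
  }));
  // PutObject's response has no Location field, so construct the URL yourself
  return `https://${bucket}.s3.${region}.amazonaws.com/${key}`;
}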

S3 Bucket copy object

I'm currently working with the AWS S3 bucket and its services. I'm copying an object from one folder to another within the bucket, and in the response I compare the copy's ETag with the source object's metadata. If these tags are equal, I return the destination image path. But when I render that response from ReactJS, it shows me a broken image, and on refresh it shows the proper result. I don't understand why this is happening.
// Fetch the source object's metadata, copy the object within the bucket,
// then compare ETags to confirm the copy before deleting the source.
ObjectMetadata metadata = s3client.getObjectMetadata(bucketName, sourceKey);
CopyObjectResult copyObjectResult = s3client.copyObject(bucketName, sourceKey, bucketName, destinationKey);
if (metadata.getETag().equals(copyObjectResult.getETag())) {
    s3client.deleteObject(bucketName, sourceKey);
    LOG.info("profile successfully uploaded to bucket");
    return s3BucketConfiguration.getS3URL() + "/" + Constants.REVIEWER_DIR + "/" + FilenameUtils.getName(url.getPath());
} else {
    LOG.error("error in upload profile to bucket");
    return String.format("%s/%s/%s", s3BucketConfiguration.getS3URL(), Constants.REVIEWER_DIR, Constants.DEFAULT_IMAGE);
}
Each time I get the log: profile successfully uploaded to bucket.
And still it renders a broken image. I'm confused about what the problem could be.
Please help me out with this.

Delete folder from S3 in Node.js

Hey guys, I was trying to delete a folder from S3 with stuff in it, but deleteObjects wasn't working, so I found this script online and it works great. My question is: why does it work? Why do you have to listObjects when deleting a folder on S3? Why can't I just pass it the folder's name? And why doesn't it error when I attempt to delete the folder without listing the objects first?
First attempt (doesn't work):
var filePath2 = "templates/" + key + "/test/";
var toPush = { Key: filePath2 };
deleteParams.Delete.Objects.push(toPush);
console.log("deleteParams", deleteParams);
console.log("deleteParams.Delete", deleteParams.Delete);
const deleteResult = await s3.deleteObjects(deleteParams).promise();
console.log("deleteResult", deleteResult);
Keep in mind filePath2 is a folder that has other stuff in it. I get no error, the catch isn't triggered, and the response says deleted and then the folder name.
Second attempt (works):
async function deleteFromS3(bucket, path) {
  const listParams = {
    Bucket: bucket,
    Prefix: path
  };
  const listedObjects = await s3.listObjectsV2(listParams).promise();
  console.log("listedObjects", listedObjects);

  if (listedObjects.Contents.length === 0) return;

  const deleteParams = {
    Bucket: bucket,
    Delete: { Objects: [] }
  };
  listedObjects.Contents.forEach(({ Key }) => {
    deleteParams.Delete.Objects.push({ Key });
  });
  console.log("deleteParams", deleteParams);

  const deleteResult = await s3.deleteObjects(deleteParams).promise();
  console.log("deleteResult", deleteResult);

  if (listedObjects.IsTruncated && deleteResult)
    await deleteFromS3(bucket, path);
}
Then I call the function like so:
const result = await deleteFromS3(myBucketName, folderPath);
Folders do not exist in Amazon S3. It is a flat object storage system, where the filename (Key) for each object contains the full path.
While Amazon S3 does support the concept of a Common Prefix, which can make things appear as though they are in folders/directories, folders do not actually exist.
For example, you could run a command like this:
aws s3 cp foo.txt s3://my-bucket/folder1/folder2/foo.txt
This would work even if the folders do not exist! It is merely storing an object with a Key of folder1/folder2/foo.txt.
If you were then to delete that object, the 'folder' would disappear because no object has it as a path. That is because the folder never actually existed.
Sometimes people want an empty folder to appear, so they create a zero-length object with the same name as the folder, e.g. folder1/folder2/.
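For example, a sketch of creating such a placeholder with the v2 SDK (the bucket and key names are only illustrative):
// Zero-length object whose Key ends in "/" so the console shows it as a folder
await s3.putObject({
  Bucket: 'my-bucket',
  Key: 'folder1/folder2/',
  Body: ''
}).promise();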
So, your first program did not work because it deleted the 'folder' entry itself, which has nothing to do with deleting the content of the folder (since there is no concept of 'content' of a folder). It also explains why no error was raised: S3 deletes succeed even when the given key does not exist, so deleteObjects simply reports the 'folder' key as Deleted.
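If you simply want everything under a prefix gone and don't need to do it from Node, the AWS CLI will perform the list-and-delete loop for you (the bucket and prefix here are placeholders):
aws s3 rm s3://my-bucket/templates/my-key/test/ --recursive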

AWS S3 copy to bucket from remote location

There is a large dataset on a public server (~0.5 TB, multi-part here), which I would like to copy into my own S3 bucket. It seems like aws s3 cp only works with local files or files already in S3 buckets?
How can I copy those files (single or multi-part) into S3? Can I use the AWS CLI, or do I need something else?
There's no way to upload it to S3 directly from the remote location. But you can stream the contents of the remote files to your machine and then up to S3. This means you will have downloaded the entire 0.5 TB of data, but your computer will only ever hold a tiny fraction of that data in memory at a time (it will not be persisted to disk either). Here is a simple implementation in JavaScript:
const request = require('request')
const async = require('async')
const AWS = require('aws-sdk')
const s3 = new AWS.S3()
const Bucket = 'nyu_depth_v2'
const baseUrl = 'http://horatio.cs.nyu.edu/mit/silberman/nyu_depth_v2/'
const parallelLimit = 5
const parts = [
'basements.zip',
'bathrooms_part1.zip',
'bathrooms_part2.zip',
'bathrooms_part3.zip',
'bathrooms_part4.zip',
'bedrooms_part1.zip',
'bedrooms_part2.zip',
'bedrooms_part3.zip',
'bedrooms_part4.zip',
'bedrooms_part5.zip',
'bedrooms_part6.zip',
'bedrooms_part7.zip',
'bookstore_part1.zip',
'bookstore_part2.zip',
'bookstore_part3.zip',
'cafe.zip',
'classrooms.zip',
'dining_rooms_part1.zip',
'dining_rooms_part2.zip',
'furniture_stores.zip',
'home_offices.zip',
'kitchens_part1.zip',
'kitchens_part2.zip',
'kitchens_part3.zip',
'libraries.zip',
'living_rooms_part1.zip',
'living_rooms_part2.zip',
'living_rooms_part3.zip',
'living_rooms_part4.zip',
'misc_part1.zip',
'misc_part2.zip',
'office_kitchens.zip',
'offices_part1.zip',
'offices_part2.zip',
'playrooms.zip',
'reception_rooms.zip',
'studies.zip',
'study_rooms.zip'
]
async.eachLimit(parts, parallelLimit, (Key, cb) => {
  s3.upload({
    Key,
    Bucket,
    Body: request(baseUrl + Key)
  }, cb)
}, (err) => {
  if (err) console.error(err)
  else console.log('Done')
})

Image file cut off when uploading to AWS S3 bucket via Django and Boto3

When I upload a larger image (3+ MB) to an AWS S3 bucket, only part of the image is saved to the bucket (roughly the top 10% of the image, with the rest displaying as grey space). These images consistently show a size of 256 KB. There isn't any issue with smaller files.
Here's my code:
s3 = boto3.resource('s3')
s3.Bucket(settings.AWS_MEDIA_BUCKET_NAME).put_object(Key=fname, Body=data)
...where data is the binary data of the image file.
There are no issues when files are smaller, and in the S3 bucket the larger files all show as 256 KB.
I haven't been able to find any documentation about why this might be happening. Can someone please point out what I'm missing?
Thanks!
I had the same issue and it took me hours to figure it out. I finally fixed it by creating a stream. This is my code:
// Assumes fs, the aws-sdk client, and the bucket name are set up elsewhere, e.g.:
const fs = require('fs');
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

const uploadFile = (filePath) => {
  let fileName = filePath;
  fs.readFile(fileName, (err, data) => {
    if (err) throw err;
    // Pass a read stream as the body instead of the raw buffer
    let body = fs.createReadStream(filePath);
    const params = {
      Bucket: 'bucketname', // pass your bucket name
      Key: fileName,
      Body: body,
      ContentType: 'image/jpeg',
      ContentEncoding: 'base64',
    };
    s3.upload(params, function(s3Err, data) {
      if (s3Err) throw s3Err;
      console.log(`File uploaded successfully at ${data.Location}`);
    });
  });
};
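For completeness, a hypothetical call to that helper (the file path is just an example):
uploadFile('./images/large-photo.jpg');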