Hey guys, I was trying to delete a folder from S3 with stuff in it, but deleteObjects wasn't working, so I found this script online and it works great. My question is: why does it work? Why do you have to listObjects when deleting a folder on S3? Why can't I just pass it the object's name? And why doesn't it error when I attempt to delete the folder without listing the objects first?
First attempt (doesn't work):
var filePath2 = "templates/" + key + "/test/";

// deleteParams was built earlier, roughly along these lines
var deleteParams = { Bucket: myBucketName, Delete: { Objects: [] } };

var toPush = { Key: filePath2 };
deleteParams.Delete.Objects.push(toPush);
console.log("deleteParams", deleteParams);
console.log("deleteParams.Delete", deleteParams.Delete);
const deleteResult = await s3.deleteObjects(deleteParams).promise();
console.log("deleteResult", deleteResult);
Keep in mind filePath2 is a 'folder' that has other stuff in it. I get no error, yet the catch isn't triggered, and the response says Deleted and then the folder name.
Second attempt (works):
async function deleteFromS3(bucket, path) {
  const listParams = {
    Bucket: bucket,
    Prefix: path
  };

  // List every object whose key starts with the given prefix
  const listedObjects = await s3.listObjectsV2(listParams).promise();
  console.log("listedObjects", listedObjects);

  if (listedObjects.Contents.length === 0) return;

  const deleteParams = {
    Bucket: bucket,
    Delete: { Objects: [] }
  };

  listedObjects.Contents.forEach(({ Key }) => {
    deleteParams.Delete.Objects.push({ Key });
  });

  console.log("deleteParams", deleteParams);
  const deleteResult = await s3.deleteObjects(deleteParams).promise();
  console.log("deleteResult", deleteResult);

  // listObjectsV2 returns at most 1000 keys per call, so recurse if truncated
  if (listedObjects.IsTruncated && deleteResult)
    await deleteFromS3(bucket, path);
}
Then I call the function like so:
const result = await deleteFromS3(myBucketName, folderPath);
Folders do not exist in Amazon S3. It is a flat object storage system, where the filename (Key) for each object contains the full path.
While Amazon S3 does support the concept of a Common Prefix, which can make things appear as though they are in folders/directories, folders do not actually exist.
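You can even ask the API for that folder-like view explicitly. The following is just a hedged sketch (the bucket and prefix names are placeholders, not from your code): listing with a Delimiter makes S3 group keys by their next path segment and return them as CommonPrefixes.

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

async function showCommonPrefixes() {
  const res = await s3.listObjectsV2({
    Bucket: 'my-bucket',   // placeholder bucket
    Prefix: 'templates/',  // placeholder prefix
    Delimiter: '/'
  }).promise();

  // CommonPrefixes might look like [{ Prefix: 'templates/abc/' }, ...];
  // each entry exists only because some object's Key starts with it.
  console.log(res.CommonPrefixes);
}

Each prefix in that list exists only because some object's Key begins with it; there is no folder object behind it.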
For example, you could run a command like this:
aws s3 cp foo.txt s3://my-bucket/folder1/folder2/foo.txt
This would work even if the folders do not exist! It is merely storing an object with a Key of folder1/folder2/foo.txt.
If you were then to delete that object, the 'folder' would disappear, because no remaining object has it in its path. That is because the folder never actually existed.
Sometimes people want an empty folder to appear, so they create a zero-length object with the same name as the folder, e.g. folder1/folder2/.
So, your first program did not work because it deleted the 'folder', which has nothing to do with deleting the content of the folder (since there is no concept of 'content' of a folder).
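To make the "why doesn't it error" part concrete: DeleteObjects reports a key as Deleted even when no object with that key exists, so passing just the folder key succeeds silently. Here is a hedged sketch (bucket and key names are placeholders, not the asker's real values):

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

async function deleteFolderKeyOnly() {
  const result = await s3.deleteObjects({
    Bucket: 'my-bucket',                                     // placeholder
    Delete: { Objects: [{ Key: 'templates/abc/test/' }] }    // placeholder "folder" key
  }).promise();

  console.log(result.Deleted); // lists the key even if nothing was stored under it
  console.log(result.Errors);  // [], no error, which is why the catch never fires
}

As far as the S3 API is concerned, deleting a key that does not exist is still a success; only the objects you explicitly list get removed.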
Related
I have multiple folders in an S3 bucket and each folder contains some .txt files. Now I want to fetch just 10 .txt files from a given folder using the JavaScript API.
For example, the path is something like this:
s3bucket/folder1/folder2/folder3/id
Now the id folder is the one containing multiple .txt files. There are multiple id folders inside folder3. I want to pass an id and get 10 S3 objects which have that id as prefix. Is this possible using listObjectsV2? How do I limit the response to just 10 objects?
s3bucket/folder1/folder2/folder3
├── id1
│   ├── obj1.txt
│   └── obj2.txt
├── id2
│   ├── obj3.txt
│   └── obj4.txt
└── id3
    ├── obj5.txt
    └── obj6.txt
So if I pass:
var params = { Bucket: "s3bucket", Key: "folder1/folder2/folder3/id1" }
I should get obj1.txt and obj2.txt in response.
Which S3 method are you using? I suggest using listObjectsV2 to achieve your goal. A possible call might look like the following:
const s3 = new AWS.S3();

const { Contents } = await s3.listObjectsV2({
  Bucket: 's3bucket',
  Prefix: 'folder1/folder2/folder3/id1',
  MaxKeys: 10
}).promise();
To get the object values, you need to call getObject on each Key:
const responses = await Promise.all((Contents || []).map(({ Key }) => (
  s3.getObject({
    Bucket: 's3bucket',
    Key
  }).promise()
)));
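Not part of the original answer, but as a follow-up sketch: in the v2 SDK each getObject response returns Body as a Buffer, so for UTF-8 .txt files you could turn the results into strings like this.

const texts = responses.map(({ Body }) => Body.toString('utf-8'));
console.log(texts);

One caveat worth noting: a Prefix of 'folder1/folder2/folder3/id1' also matches keys such as .../id10 or .../id123, so appending a trailing slash ('.../id1/') is safer if the ids can share a leading substring.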
I set the projectId, the bucket name, and the content type of a file, but I need to upload it to a particular folder in the destination bucket. How do I set the complete URL?
I tried to add a directory inside the bucket name or inside the file name, but it doesn't work. It seems to be a parameter, but I don't know where I have to set it. This is my preliminary code:
var newObject = new Google.Apis.Storage.v1.Data.Object
{
    Bucket = "bucketName",
    Name = "filename",
    ContentType = "fileContentType"
};

var credential = Google.Apis.Auth.OAuth2.GoogleCredential.FromJson(
    System.IO.File.ReadAllText("credentials.json"));

using (var storageClient = Google.Cloud.Storage.V1.StorageClient.Create(credential))
{
    using (var fileStream = new System.IO.FileStream(filePath, System.IO.FileMode.Open))
    {
        var uploadObjectOptions = new Google.Cloud.Storage.V1.UploadObjectOptions();
        await storageClient.UploadObjectAsync(newObject, fileStream, uploadObjectOptions, progress: null)
            .ConfigureAwait(false);
    }
}

return Ok();
You're adding the "folder" in the wrong place. Note that Google Cloud Storage doesn't have real folders or directories; instead it uses simulated directories, which are really just objects with a prefix in their name. So keep Bucket as the plain bucket name and put the folder into the object's Name:
bucket = bucketName
object = folderName/objectName
I tried posting the following question on the AWS forum but I got the error message - "Your account is not ready for posting messages yet.", which is why I'm posting this here.
I am reading through the following example code for Amazon S3:
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/s3-example-photo-album-full.html
Whenever a new object is created, the example code nests the object within a nameless sub-folder like so:
function addPhoto(albumName) {
  var files = document.getElementById('photoupload').files;
  if (!files.length) {
    return alert('Please choose a file to upload first.');
  }
  var file = files[0];
  var fileName = file.name;

  // Why is the photo placed in a nameless subfolder (below)?
  var albumPhotosKey = encodeURIComponent(albumName) + '//';
  ...
Is there a particular reason / need for this?
I am using an HTTPS-triggered Google Cloud Function that is supposed to download a file from Google Cloud Storage (and then combine it with data from req.body). It seems to work as long as the downloaded file is in the root directory, but I am having problems accessing the same file when it is placed inside a folder. The path to the file is documents/someTemplate.docx.
'use strict';

const functions = require('firebase-functions');
const path = require('path');
const os = require("os");
const fs = require('fs');

const gcconfig = {
  projectId: "MYPROJECTNAME",
  keyFilename: "KEYNAME.json"
};

const Storage = require('@google-cloud/storage')(gcconfig);
const bucketPath = 'MYPROJECTNAME.appspot.com';
const bucket = Storage.bucket(bucketPath);

exports.getFileFromStorage = functions.https.onRequest((req, res) => {
  let fileName = 'documents/someTemplate.docx';
  let tempFilePath = path.join(os.tmpdir(), fileName);

  return bucket.file(fileName)
    .download({
      destination: tempFilePath,
    })
    .then(() => {
      console.log(fileName + ' downloaded locally to', tempFilePath);
      let content = fs.readFileSync(tempFilePath, 'binary');
      // do stuff with the file and data from req.body
      return;
    })
    .catch(err => {
      res.status(500).json({
        error: err
      });
    });
});
What I don't understand is that when I move the file to the root directory and use the file name someTemplate.docx instead, the code works.
Google's documentation states that
Objects added to a folder appear to reside within the folder in the GCP Console. In reality, all objects exist at the bucket level, and simply include the directory structure in their name. For example, if you create a folder named pets and add a file cat.jpeg to that folder, the GCP Console makes the file appear to exist in the folder. In reality, there is no separate folder entity: the file simply exists in the bucket and has the name pets/cat.jpeg.
This seems to be correct as in the metadata the file name is indeed documents/someTemplate.docx. Therefore I don't understand why the code above does not work.
Posting the comment answer from @James Poag for visibility:
Also, perhaps the directory doesn't exist on the temp folder location? Maybe try let tempFilePath = path.join(os.tmpdir(), 'tempkjhgfhjnmbvgh.docx'); – James Poag Aug 21 at 17:10
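In other words, path.join(os.tmpdir(), 'documents/someTemplate.docx') points into a documents/ sub-directory that does not exist inside the temp folder, so writing the download there fails. A hedged sketch of the two obvious fixes (only fileName and the bucket object come from the question; the rest is my assumption):

const path = require('path');
const os = require('os');

async function downloadTemplate() {
  const fileName = 'documents/someTemplate.docx';

  // Option 1: flatten the destination so the file lands directly in tmpdir
  const tempFilePath = path.join(os.tmpdir(), path.basename(fileName));

  // Option 2: keep the nested name, but create the directory first:
  //   const fs = require('fs');
  //   const tempFilePath = path.join(os.tmpdir(), fileName);
  //   fs.mkdirSync(path.dirname(tempFilePath), { recursive: true });

  // `bucket` is the Storage bucket defined in the question's code above
  await bucket.file(fileName).download({ destination: tempFilePath });
  console.log(fileName + ' downloaded locally to', tempFilePath);
}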
I have a simple single-page app, that is deployed to an S3 bucket using gulp-awspublish. We use inquirer.js (via gulp-prompt) to ask the developer which bucket to deploy to.
Sometimes the app may be deployed to several S3 buckets. Currently, we only allow one bucket to be selected, so the developer has to gulp deploy for each bucket in turn. This is dull and prone to error.
I'd like to be able to select multiple buckets and deploy the same content to each. It's simple to select multiple buckets with inquirer.js/gulp-prompt, but not simple to generate arbitrary multiple S3 destinations from a single stream.
Our deploy task is based upon generator-webapp's S3 recipe. The recipe suggests gulp-rename to rewrite the path to write to a specific bucket. Currently our task looks like this:
gulp.task('deploy', ['build'], () => {
  // get AWS creds
  if (typeof(config.awsCreds) !== 'object') {
    return console.error('No config.awsCreds settings found. See README');
  }

  var dirname;

  const publisher = $.awspublish.create({
    key: config.awsCreds.key,
    secret: config.awsCreds.secret,
    bucket: config.awsCreds.bucket
  });

  return gulp.src('dist/**/*.*')
    .pipe($.prompt.prompt({
      type: 'list',
      name: 'dirname',
      message: 'Using the ‘' + config.awsCreds.bucket + '’ bucket. Which hostname would you like to deploy to?',
      choices: config.awsCreds.dirnames,
      default: config.awsCreds.dirnames.indexOf(config.awsCreds.dirname)
    }, function (res) {
      dirname = res.dirname;
    }))
    .pipe($.rename(function(path) {
      path.dirname = dirname + '/dist/' + path.dirname;
    }))
    .pipe(publisher.publish())
    .pipe(publisher.cache())
    .pipe($.awspublish.reporter());
});
It's hopefully obvious, but config.awsCreds might look something like:
awsCreds: {
  dirname: 'default-bucket',
  dirnames: ['default-bucket', 'other-bucket', 'another-bucket']
}
Gulp-rename rewrites the destination path to use the correct bucket.
We can select multiple buckets by using "checkbox" instead of "list" for the gulp-prompt options, but I'm not sure how to then deliver it to multiple buckets.
In a nutshell, if $.prompt returns an array of strings instead of a string, how can I write the source to multiple destinations (buckets) instead of a single bucket?
Please keep in mind that gulp.dest() is not used -- only gulp.awspublish() -- and we don't know how many buckets might be selected.
I've never used S3, but if I understand your question correctly, a file js/foo.js should be renamed to default-bucket/dist/js/foo.js and other-bucket/dist/js/foo.js when the checkboxes default-bucket and other-bucket are selected?
Then this should do the trick:
// additionally required modules
var path = require('path');
var through = require('through2').obj;

gulp.task('deploy', ['build'], () => {
  if (typeof(config.awsCreds) !== 'object') {
    return console.error('No config.awsCreds settings found. See README');
  }

  var dirnames = []; // array for selected buckets

  const publisher = $.awspublish.create({
    key: config.awsCreds.key,
    secret: config.awsCreds.secret,
    bucket: config.awsCreds.bucket
  });

  return gulp.src('dist/**/*.*')
    .pipe($.prompt.prompt({
      type: 'checkbox', // use checkbox instead of list
      name: 'dirnames', // use different result name
      message: 'Using the ‘' + config.awsCreds.bucket +
               '’ bucket. Which hostname would you like to deploy to?',
      choices: config.awsCreds.dirnames,
      default: config.awsCreds.dirnames.indexOf(config.awsCreds.dirname)
    }, function (res) {
      dirnames = res.dirnames; // store array of selected buckets
    }))
    // use through2 instead of gulp-rename
    .pipe(through(function(file, enc, done) {
      dirnames.forEach((dirname) => {
        var f = file.clone();
        f.path = path.join(f.base, dirname, 'dist',
                           path.relative(f.base, f.path));
        this.push(f);
      });
      done();
    }))
    .pipe(publisher.publish()) // publish() still has to run before cache()
    .pipe(publisher.cache())
    .pipe($.awspublish.reporter());
});
Notice the comments where I made changes from the code you posted.
What this does is use through2 to clone each file passing through the stream. Each file is cloned as many times as there were bucket checkboxes selected and each clone is renamed to end up in a different bucket.