AWS S3 - Move an object to a different folder - amazon-web-services

Is there any way to move an object to a different folder in the same bucket using the AWS SDK (preferably for .Net)?
Searching around, all I can see is the suggestion to Copy to the new location and then Delete the original (which is easy enough via "CopyObjectRequest" and "DeleteObjectRequest"), but I'm wondering: is this the only way?

It turns out you can use Amazon.S3.IO.S3FileInfo to get the object and then call its "MoveTo" method to move the object.
S3FileInfo currentObject = new S3FileInfo(S3Client, Bucket, CurrentKey);
S3FileInfo movedObject = currentObject.MoveTo(Bucket, NewKey);
EDIT: It turns out the above "MoveTo" method just performs a Copy and Delete behind the scenes anyway :)
For further information:
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/index.html?page=S3/TS3IOS3FileInfo.html&tocid=Amazon_S3_IO_S3FileInfo

As #user1527762 mentions, S3FileInfo is not available in the .NET Core version of the AWSSDK.S3 library (v3.7.2.2).
If you are on .NET Core, you can use CopyObject to move an object from one bucket to another.
var s3Client = new AmazonS3Client(RegionEndpoint.USEast1);
var copyRequest = new CopyObjectRequest
{
    SourceBucket = "source-bucket",
    SourceKey = "fileName.jpg",
    DestinationBucket = "dest-bucket",
    DestinationKey = "fileName.jpg",
    CannedACL = S3CannedACL.PublicRead,
};
var copyResponse = await s3Client.CopyObjectAsync(copyRequest);
CopyObject does not delete the source object, so you would have to call delete like this:
DeleteObjectRequest request = new DeleteObjectRequest
{
    BucketName = "source-bucket",
    Key = "fileName.jpg"
};
await s3Client.DeleteObjectAsync(request);
Details on CopyObject here: https://docs.aws.amazon.com/AmazonS3/latest/userguide/copy-object.html
And DeleteObject here: https://docs.aws.amazon.com/AmazonS3/latest/userguide/delete-objects.html

Related

How do I set a folder destination into a bucket using Google Cloud Storage?

I set the projectId, the bucket name and the content type of a file, but I need to upload it to a particular folder in the destination bucket. How do I set the complete URL?
This is my preliminary code:
I tried to add a directory inside bucketName or inside filename, but it doesn't work. It seems to be a parameter, but I don't know where I have to set it.
var newObject = new Google.Apis.Storage.v1.Data.Object
{
    Bucket = "bucketName",
    Name = "filename",
    ContentType = "fileContentType"
};
var credential = Google.Apis.Auth.OAuth2.GoogleCredential.FromJson(System.IO.File.ReadAllText("credentials.json"));
using (var storageClient = Google.Cloud.Storage.V1.StorageClient.Create(credential))
{
    using (var fileStream = new System.IO.FileStream(filePath, System.IO.FileMode.Open))
    {
        var uploadObjectOptions = new Google.Cloud.Storage.V1.UploadObjectOptions();
        await storageClient.UploadObjectAsync(newObject, fileStream, uploadObjectOptions, progress: null).ConfigureAwait(false);
    }
}
return Ok();
You're adding the "folder" in the wrong place. Note that Google Cloud Storage doesn't have real folders or directories; it uses simulated directories, where a "directory" is really just a prefix in an object's name.
bucket = bucket
object = folderName/objectname
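To make that concrete, here is a small Java sketch (plain stdlib, no GCS client; the `keyFor` and `listFolder` helpers and the sample names are hypothetical) showing that a "folder" is nothing more than a shared prefix on the object name:

```java
import java.util.List;
import java.util.stream.Collectors;

public class SimulatedFolders {
    // A "directory" in object storage is just a prefix on the object name:
    // uploading into folderName means naming the object "folderName/filename".
    static String keyFor(String folder, String filename) {
        return folder + "/" + filename;
    }

    // "Listing a folder" is just filtering all object names by that prefix.
    static List<String> listFolder(List<String> keys, String folder) {
        return keys.stream()
                .filter(k -> k.startsWith(folder + "/"))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        String key = keyFor("folderName", "filename");
        System.out.println(key); // folderName/filename
        System.out.println(listFolder(List.of(key, "other.txt"), "folderName"));
    }
}
```

So in the code above, the fix would be to put the folder into `Name` (e.g. `Name = "folderName/filename"`) rather than looking for a separate parameter.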

delete folder from s3 nodejs

Hey guys, I was trying to delete a folder from S3 with stuff in it, but deleteObjects wasn't working, so I found this script online and it works great. My question is: why does it work? Why do you have to listObjects when deleting a folder on S3? Why can't I just pass it the folder's name? And why doesn't it error when I attempt to delete the folder without listing the objects first?
First attempt (doesn't work):
var filePath2 = "templates/" + key + "/test/";
var toPush = { Key: filePath2 };
deleteParams.Delete.Objects.push(toPush);
console.log("deleteParams", deleteParams);
console.log("deleteParams.Delete", deleteParams.Delete);
const deleteResult = await s3.deleteObjects(deleteParams).promise();
console.log("deleteResult", deleteResult);
Keep in mind filePath2 is a folder that has other stuff in it. I get no error, the catch isn't triggered, and the response says deleted followed by the folder name.
Second attempt (works):
async function deleteFromS3(bucket, path) {
  const listParams = {
    Bucket: bucket,
    Prefix: path
  };
  const listedObjects = await s3.listObjectsV2(listParams).promise();
  console.log("listedObjects", listedObjects);
  if (listedObjects.Contents.length === 0) return;

  const deleteParams = {
    Bucket: bucket,
    Delete: { Objects: [] }
  };
  listedObjects.Contents.forEach(({ Key }) => {
    deleteParams.Delete.Objects.push({ Key });
  });
  console.log("deleteParams", deleteParams);

  const deleteResult = await s3.deleteObjects(deleteParams).promise();
  console.log("deleteResult", deleteResult);
  if (listedObjects.IsTruncated && deleteResult)
    await deleteFromS3(bucket, path);
}
Then I call the function like so:
const result = await deleteFromS3(myBucketName, folderPath);
Folders do not exist in Amazon S3. It is a flat object storage system, where the filename (Key) for each object contains the full path.
While Amazon S3 does support the concept of a Common Prefix, which can make things appear as though they are in folders/directories, folders do not actually exist.
For example, you could run a command like this:
aws s3 cp foo.txt s3://my-bucket/folder1/folder2/foo.txt
This would work even if the folders do not exist! It is merely storing an object with a Key of folder1/folder2/foo.txt.
If you were then to delete that object, the 'folder' would disappear because no object has it as a path. That is because the folder never actually existed.
Sometimes people want an empty folder to appear, so they create a zero-length object with the same name as the folder, e.g. folder1/folder2/.
So, your first attempt did not work because it only deleted the 'folder' object itself, which has nothing to do with deleting the contents of the folder (since a folder has no real 'contents').
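The distinction can be sketched with an in-memory stand-in for the flat key store (plain Java, not AWS SDK code; the helper names are made up for this illustration):

```java
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

public class FlatKeyStore {
    // S3 is a flat key/object store. "Deleting the folder" only removes an
    // object whose key is literally "templates/x/test/"; it does not touch
    // keys that merely start with that prefix.
    static int deleteExactKey(Set<String> keys, String key) {
        keys.remove(key);
        return keys.size();
    }

    // Deleting the folder's contents means deleting every key with the
    // prefix, which is why the working script lists the objects first.
    static int deleteByPrefix(Set<String> keys, String prefix) {
        keys.removeIf(k -> k.startsWith(prefix));
        return keys.size();
    }

    public static void main(String[] args) {
        Set<String> keys = new TreeSet<>(List.of(
                "templates/x/test/a.txt",
                "templates/x/test/b.txt"));
        System.out.println(deleteExactKey(keys, "templates/x/test/")); // 2: no object has that exact key
        System.out.println(deleteByPrefix(keys, "templates/x/test/")); // 0: the contents are gone
    }
}
```

This also explains why the first attempt reported success without an error: deleting a key that does not exist is not an error in S3.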

Why does AWS S3 JavaScript Example Code save objects into a nameless subfolder?

I tried posting the following question on the AWS forum but I got the error message - "Your account is not ready for posting messages yet.", which is why I'm posting this here.
I am reading through the following example code for Amazon S3:
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/s3-example-photo-album-full.html
Whenever a new object is created, the example code nests the object within a nameless sub-folder like so:
function addPhoto(albumName) {
  var files = document.getElementById('photoupload').files;
  if (!files.length) {
    return alert('Please choose a file to upload first.');
  }
  var file = files[0];
  var fileName = file.name;
  // Why is the photo placed in a nameless subfolder (below)?
  var albumPhotosKey = encodeURIComponent(albumName) + '//';
  ...
Is there a particular reason / need for this?

S3 CopyObjectRequest between regions

I'm trying to copy some objects between 2 S3 buckets that are in different regions.
I have this:
static void PutToDestination(string filename)
{
    var credentials = new Amazon.Runtime.BasicAWSCredentials(awsAccessKeyId, awsSecretKey);
    var client = new Amazon.S3.AmazonS3Client(credentials, Amazon.RegionEndpoint.GetBySystemName(awsS3RegionNameSource));

    CopyObjectRequest request = new CopyObjectRequest();
    request.SourceKey = filename;
    request.DestinationKey = filename;
    request.SourceBucket = awsS3BucketNameSource;
    request.DestinationBucket = awsS3BucketNameDest;

    try
    {
        CopyObjectResponse response = client.CopyObject(request);
    }
    catch (AmazonS3Exception x)
    {
        Console.WriteLine(x);
    }
}
I get "The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint."
There doesn't seem to be a way to set separate endpoints for source and destination.
Is there a different method I should look at?
Thanks
Did you try this? https://github.com/aws/aws-sdk-java-v2/issues/2212 (provide the destination region when building the S3 client). Worked for me :)
I think if you don't specify the region explicitly, then a cross-region copy should work.
See the documentation.
However, Transfer Acceleration will not work, and the copy will effectively be
COPY = GET + PUT
Here is the relevant excerpt from the documentation:
Important
Amazon S3 Transfer Acceleration does not support cross-region copies.
In your code, you are specifying the region like below:
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withCredentials(new ProfileCredentialsProvider())
        .withRegion(clientRegion)
        .build();
Instead, initialize the S3 client without a region, like below:
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withCredentials(new ProfileCredentialsProvider())
        .build();

How to iterate over S3 file keys via CompletableFuture in AWS SDK 2.0?

Consider this example using the sync client and the old AWS SDK:
public void syncIterateObjects() {
    AmazonS3 s3Client = null;
    String marker = null;
    do {
        ObjectListing objects = s3Client.listObjects(
            new ListObjectsRequest()
                .withBucketName("bucket")
                .withPrefix("prefix")
                .withMarker(marker)
                .withDelimiter("/")
                .withMaxKeys(100)
        );
        marker = objects.getNextMarker();
    } while (marker != null);
}
Everything is clear: do/while does the work. Now consider the async example with AWS SDK 2.0:
public void asyncIterateObjects() {
    S3AsyncClient client = S3AsyncClient.builder().build();
    final CompletableFuture<ListObjectsV2Response> response = client.listObjectsV2(ListObjectsV2Request.builder()
            .delimiter("/")
            .bucket("bucket")
            .prefix("prefix")
            .build())
            .thenApply(Function.identity());
    // what to do next ???
}
OK, I got a CompletableFuture, but how do I run a cycle to pass the marker (nextContinuationToken in AWS SDK 2.0) from one future to the next?
You have only one future; notice its type is a future of a single list response, not of all pages.
Now you have to decide whether you want to get the future's value or apply further transformations to it before getting it. Once a response completes, you can apply the same idea as the do/while loop: issue the next request with the continuation token from the previous response, until no token is returned.
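The token-passing pattern can be sketched without blocking by chaining futures with thenCompose. This is a stdlib-only illustration, not AWS SDK code: fetchPage stands in for client.listObjectsV2(...), and the Page record, tokens, and keys are made up; with the real client you would build a new ListObjectsV2Request carrying continuationToken(page.nextContinuationToken()) at the marked spot.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class AsyncPaging {
    // A fake "page of results": some keys plus the next continuation
    // token (null means this was the last page).
    record Page(List<String> keys, String nextToken) {}

    // Stand-in for client.listObjectsV2(request): returns one page
    // for the given continuation token.
    static CompletableFuture<Page> fetchPage(String token) {
        if (token == null)      return CompletableFuture.completedFuture(new Page(List.of("a", "b"), "t1"));
        if (token.equals("t1")) return CompletableFuture.completedFuture(new Page(List.of("c"), null));
        throw new IllegalArgumentException(token);
    }

    // Chain futures: when each page completes, request the next one with
    // its continuation token, until the token is null. This is the async
    // equivalent of the sync do/while over getNextMarker().
    static CompletableFuture<List<String>> listAll(String token, List<String> acc) {
        return fetchPage(token).thenCompose(page -> {
            acc.addAll(page.keys());
            return page.nextToken() == null
                    ? CompletableFuture.completedFuture(acc)
                    : listAll(page.nextToken(), acc); // <- real code: next request with the token
        });
    }

    public static void main(String[] args) {
        System.out.println(listAll(null, new ArrayList<>()).join()); // [a, b, c]
    }
}
```

The whole chain stays non-blocking until the final join(); each recursion step only runs after the previous page's future completes.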