Downloading folders from Google Cloud Storage Bucket - google-cloud-platform

I'm new to Google Cloud Platform. I have trained my model on Datalab and saved the model folder to Cloud Storage in my bucket. I can download existing files in the bucket to my local machine by right-clicking a file and choosing "Save link as", but when I try to download a folder the same way, I get an image of the folder icon instead of the folder itself. Is there any way I can download the whole folder and its contents as they are? Is there a gsutil command to copy folders from Cloud Storage to a local directory?

You can find docs on the gsutil tool here and for your question more specifically here.
The command you want to use is:
gsutil cp -r gs://bucket/folder .

This is how you can download a folder from a Google Cloud Storage bucket.
Run the following command to download it from the bucket to the local path of your Google Cloud Console (Cloud Shell) session:
gsutil -m cp -r gs://{bucketname}/{folderPath} {localpath}
Once you run that command, confirm that your folder is on the local path by running ls to list the files and directories there.
Now zip your folder by running the command below:
zip -r foldername.zip yourfolder/*
Once the zip process is done, click on the "More" dropdown menu on the right side of the Google Cloud Console, then select the "Download file" option. You will be prompted to enter the name of the file that you want to download; enter the name of the zip file, "foldername.zip".

Prerequisites:
The Google Cloud SDK is installed and initialized ($ gcloud init)
Command:
gsutil -m cp -r gs://bucket-name .
This will copy all of the files using multiple threads/processes, which is faster. I found that the "dir" command suggested in the official gsutil docs did not work.
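If you prefer to stay in Python rather than shell out to gsutil, a rough equivalent of -m's parallelism is to download blobs across a thread pool. This is a minimal sketch, assuming the google-cloud-storage client; the bucket, prefix, and local directory names in the usage comment are placeholders.

import os
from concurrent.futures import ThreadPoolExecutor
from google.cloud import storage

def download_prefix_parallel(bucket_name, prefix, local_dir, max_workers=8):
    """Download every blob under `prefix` into `local_dir`, several at a time."""
    client = storage.Client()
    bucket = client.bucket(bucket_name)
    blobs = [b for b in bucket.list_blobs(prefix=prefix) if not b.name.endswith("/")]

    def _download(blob):
        # recreate the blob's relative path under local_dir
        target = os.path.join(local_dir, blob.name[len(prefix):].lstrip("/"))
        os.makedirs(os.path.dirname(target) or ".", exist_ok=True)
        blob.download_to_filename(target)
        return blob.name

    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        for name in pool.map(_download, blobs):
            print("downloaded", name)

# Hypothetical usage:
# download_prefix_parallel("my-bucket", "models/run-01/", "./run-01")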

If you are downloading data from Google Cloud Storage using Python and want to maintain the same folder structure, follow this code I wrote in Python.
OPTION 1
import os
import logging
from google.cloud import storage

def findOccurrences(s, ch):  # find positions of '/' in the blob path; used to create folders in local storage
    return [i for i, letter in enumerate(s) if letter == ch]

def download_from_bucket(bucket_name, blob_path, local_path):
    # create this folder locally if it does not exist
    if not os.path.exists(local_path):
        os.makedirs(local_path)
    storage_client = storage.Client()
    bucket = storage_client.get_bucket(bucket_name)
    blobs = list(bucket.list_blobs(prefix=blob_path))
    startloc = 0
    for blob in blobs:
        startloc = 0
        folderloc = findOccurrences(blob.name.replace(blob_path, ''), '/')
        if not blob.name.endswith("/"):
            if blob.name.replace(blob_path, '').find("/") == -1:
                downloadpath = local_path + '/' + blob.name.replace(blob_path, '')
                logging.info(downloadpath)
                blob.download_to_filename(downloadpath)
            else:
                for folder in folderloc:
                    if not os.path.exists(local_path + '/' + blob.name.replace(blob_path, '')[startloc:folder]):
                        create_folder = local_path + '/' + blob.name.replace(blob_path, '')[0:startloc] + '/' + blob.name.replace(blob_path, '')[startloc:folder]
                        startloc = folder + 1
                        os.makedirs(create_folder)
                downloadpath = local_path + '/' + blob.name.replace(blob_path, '')
                blob.download_to_filename(downloadpath)
                logging.info(blob.name.replace(blob_path, '')[0:blob.name.replace(blob_path, '').find("/")])
    logging.info('Blob {} downloaded to {}.'.format(blob_path, local_path))

bucket_name = 'google-cloud-storage-bucket-name'  # do not use gs://
blob_path = 'training/data'  # blob path in the bucket where the data is stored
local_dir = 'local-folder name'  # trainingData folder locally
download_from_bucket(bucket_name, blob_path, local_dir)
OPTION 2: using gsutil from Python
Another option is to call gsutil from a Python program, as below.
def download_bucket_objects(bucket_name, blob_path, local_path):
    # blob_path is the bucket folder name
    command = "gsutil cp -r gs://{bucketname}/{blobpath} {localpath}".format(bucketname=bucket_name, blobpath=blob_path, localpath=local_path)
    os.system(command)
    return command
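A quick usage example (the bucket and folder names below are placeholders; note that the function relies on import os and on gsutil being installed and authenticated):

import os

# copies gs://my-bucket/training/data into ./downloads (hypothetical names)
download_bucket_objects('my-bucket', 'training/data', './downloads')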
OPTION 3 - no Python, directly using the terminal and the Google Cloud SDK
Prerequisites: the Google Cloud SDK is installed and initialized ($ gcloud init)
Refer to the link below for the commands:
https://cloud.google.com/storage/docs/gsutil/commands/cp

gsutil -m cp -r gs://bucket-name "{path to local existing folder}"
Works for sure.

As of March 2022, the gs:// path needs to be double quoted. You can also find the proper download command by navigating to the bucket root, checking one of the directories, and clicking Download at the top.

Here's the code I wrote.
This will download the complete directory structure to your VM / local storage.
from google.cloud import storage
import os

bucket_name = "ar-data"
storage_client = storage.Client()
bucket = storage_client.get_bucket(bucket_name)

dirName = 'Data_03_09/'  # *** folder in the bucket whose content you want to download
blobs = bucket.list_blobs(prefix=dirName)  # , delimiter='/')
destpath = r'/home/jupyter/DATA_test/'  # *** path on your VM/local machine where you want to download the bucket directory

for blob in blobs:
    if blob.name.endswith('/'):  # skip the folder placeholder objects themselves
        continue
    # strip the bucket folder prefix to get the path relative to dirName
    relpath = blob.name[len(dirName):]
    currpath = destpath
    # create any intermediate directories that do not exist yet
    for n in relpath.split('/')[:-1]:
        currpath = os.path.join(currpath, n)
        if not os.path.exists(currpath):
            print('creating directory -', n, 'on path -', currpath)
            os.mkdir(currpath)
    print("downloading ...", relpath)
    blob.download_to_filename(os.path.join(destpath, relpath))
Or simply use the following in the terminal:
gsutil -m cp -r gs://{bucketname}/{folderPath} {localpath}

Related

How to know whether the path "gs://bucket1/folder_x" exists in a GCP bucket

Is there a gsutil command that can tell me whether the path gs://bucket1/folder1_x/folder2_y/ exists or not? Is there something like a ping command in gsutil?
I use Jenkins parameters folder_x and folder_y, whose values are input by the user and joined by the pipeline. Currently, if the directory exists, the pipeline shows success, but if the path is wrong the pipeline is interrupted and shows failure.
I tried gsutil stat and gsutil -q stat; they can test gs://bucket1/folder1_x/folder2_y/file1, but not a directory.
pipeline {
    stages {
        stage('Check existing dirs') {
            steps {
                script {
                    if (params['Action'] == "List_etl-output") {
                        def Output_Data = "${params['Datasource']}".toString().split(",").collect{ "\"" + it + "\"" }
                        def Output_Stage = "${params['Etl_Output_Stage']}".toString().split(",").collect{ "\"" + it + "\"" }
                        for (folder1 in Output_Data) {
                            for (folder2 in Output_Stage) {
                                sh(script: """
                                    gsutil ls -r gs://bucket1/*/$Data/$Stage
                                """)
                            }
                        }
                    }
                }
            }
        }
    }
}
I used gsutil to check whether the path gs://bucket1/*/$Data/$Stage is available or not. $Data and $Stage are given by user input, and the Jenkins pipeline is interrupted when the path is not available. I want gsutil to skip the wrong path when it's not available.
Directories don't really exist in Cloud Storage; they are a graphical representation. All the blobs are stored at the bucket root, and their names contain the full path (with the / characters that you interpret as directories, but which are not). That is also why you can only search by prefix.
To answer your question, you can search on the prefix: if there is at least 1 element, the "folder" exists, because there is at least 1 blob with this prefix. Here is an example in Python (I don't know your language; I can adapt it to several languages if you need):
from google.cloud import storage
client = storage.Client()
bucket = client.get_bucket('bucket1')
if len(list(bucket.list_blobs(prefix='folder_x/'))):
    print('there is a file in the "directory"')
else:
    print('No file with this path, so no "directory"')
Here is the example in Groovy:
import com.google.cloud.storage.Bucket
import com.google.cloud.storage.Storage
import com.google.cloud.storage.StorageOptions
Storage storage = StorageOptions.getDefaultInstance().service
Bucket bucket = storage.get("bucket1")
System.out.println(bucket.list(Storage.BlobListOption.prefix("folder_x/")).iterateAll().size())
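If the check needs to feed a true/false result into the pipeline rather than just print, a small reusable helper could look like the sketch below (assuming the google-cloud-storage Python client is available wherever the check runs; the usage values are placeholders):

from google.cloud import storage

def gcs_prefix_exists(bucket_name, prefix):
    """Return True if at least one blob starts with `prefix` in the bucket."""
    client = storage.Client()
    bucket = client.bucket(bucket_name)
    # max_results=1 keeps the check cheap: one blob is enough to prove existence
    blobs = bucket.list_blobs(prefix=prefix, max_results=1)
    return any(True for _ in blobs)

# Hypothetical usage:
# if gcs_prefix_exists('bucket1', 'folder1_x/folder2_y/'):
#     ...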

How to get URI of a blob in a google cloud storage (Python)

If I have a Blob object how can I get the URI (gs://...)?
The documentation says I can use self_link property to get the URI, but it returns the https URL instead (https://googleapis.com...)
I am using python client library for cloud storage.
Thank you
Since you are not sharing with us exactly how you are trying to achieve this, I did a quick script in Python to get this info.
There is no specific method on Blob to get the URI as gs:// in Python, but you can script it using the path_helper:
import pprint
from google.cloud import storage

def get_blob_URI():
    """Prints out a blob's gs:// URI."""
    # bucket_name = 'your-bucket-name'
    storage_client = storage.Client()
    bucket_name = 'YOUR_BUCKET'
    blob_name = 'YOUR_OBJECT_NAME'
    bucket = storage_client.get_bucket(bucket_name)
    blob = bucket.blob(blob_name)
    link = blob.path_helper(bucket_name, blob_name)
    pprint.pprint('gs://' + link)
If you want to use the gsutil tool, you can also get all the gs:// URIs of a bucket using the command gsutil ls gs://bucket.
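If all you need is the gs:// string itself, a minimal sketch (assuming you already have a Blob object from the Python client; the bucket and object names are placeholders) is simply to build it from the bucket and object names:

from google.cloud import storage

client = storage.Client()
blob = client.bucket('YOUR_BUCKET').blob('YOUR_OBJECT_NAME')

# the gs:// URI is just the bucket name plus the object name
uri = 'gs://{}/{}'.format(blob.bucket.name, blob.name)
print(uri)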

List buckets that match a bucket label with gsutil

I have my Google Cloud Storage buckets labeled.
I can't find anything in the docs on how to do a gsutil ls that only returns buckets with a specific label - is this possible?
Just had a use case where I wanted to list all buckets with a specific label. The accepted answer using subprocess was noticeably slow for me. Here is my solution using the Python client library for Cloud Storage:
from google.cloud import storage
def list_buckets_by_label(label_key, label_value):
    # list the buckets in your default project
    client = storage.Client()
    buckets = client.list_buckets()  # iterator
    # only return buckets where the label key/value match the inputs
    output = list()
    for bucket in buckets:
        if bucket.labels.get(label_key) == label_value:
            output.append(bucket.name)
    return output
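A quick usage example (the label key and value here are placeholders):

# e.g. returns the names of buckets labeled env=prod in the default project
prod_buckets = list_buckets_by_label('env', 'prod')
print(prod_buckets)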
Nowadays it is not possible to do what you want in one single step. You can do it in 3 steps:
Get all the buckets of your GCP project.
Get the labels of every bucket.
Do the gsutil ls of every bucket that meets your criteria.
This is my python 3 code that I did for you.
import subprocess

out = subprocess.getoutput("gsutil ls")
for line in out.split('\n'):
    label = subprocess.getoutput("gsutil label get " + line)
    if "YOUR_LABEL" in str(label):
        gsout = subprocess.getoutput("gsutil ls " + line)
        print("Files in " + line + ":\n")
        print(gsout)
A bash only solution:
function get_labeled_bucket {
    # list all of the buckets for the current project
    for b in $(gsutil ls); do
        # find the one with your label
        if gsutil label get "${b}" | grep -q '"key": "value"'; then
            # and return its name
            echo "${b}"
        fi
    done
}
The '"key": "value"' part is just a string; replace it with your key and your value. Call the function with LABELED_BUCKET=$(get_labeled_bucket).
In my opinion, making a bash function return more than one value is more trouble than it is worth. If you need to work with multiple buckets, I would replace the echo with the code that needs to run.
from google.cloud import storage
client = storage.Client()
for blob in client.list_blobs('bucketname', prefix='xjc/folder'):
    print(str(blob))

Get size of the bucket based on storage classes in Google cloud storage

I would like to get the size of a bucket broken down by storage class. I've added lifecycle rules to the bucket to change the storage class of files based on their age.
I've used the commands below.
gsutil du -sh gs://[bucket-name]
To get metadata:
gsutil ls -L gs://[bucket-name]
To set the lifecycle configuration on the bucket:
gsutil lifecycle set life-cycle.json gs://[bucket-name]
Can anyone help me resolve this?
Edit:
I have filed a Feature Request for this on the Public Issue Tracker. In the meantime, the code below can be used.
I believe there is no gsutil command that can show you the total size by storage class for a GCS bucket.
However, using the Cloud Storage Client Libraries for Python, I made a script that does what you’re asking for:
from google.cloud import storage
import math

### SET THESE VARIABLES ###
PROJECT_ID = ""
CLOUD_STORAGE_BUCKET = ""
###########################

def _get_storage_client():
    return storage.Client(project=PROJECT_ID)

def convert_size(size_bytes):
    if size_bytes == 0:
        return "0 B"
    size_name = ("B", "KB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB")
    i = int(math.floor(math.log(size_bytes, 1024)))
    p = math.pow(1024, i)
    s = round(size_bytes / p, 2)
    return "%s %s" % (s, size_name[i])

def size_by_class():
    client = _get_storage_client()
    bucket = client.bucket(CLOUD_STORAGE_BUCKET)
    blobs = bucket.list_blobs()
    size_multi_regional = size_regional = size_nearline = size_coldline = 0
    for blob in blobs:
        if blob.storage_class == "MULTI_REGIONAL":
            size_multi_regional = size_multi_regional + blob.size
        if blob.storage_class == "REGIONAL":
            size_regional = size_regional + blob.size
        if blob.storage_class == "NEARLINE":
            size_nearline = size_nearline + blob.size
        if blob.storage_class == "COLDLINE":
            size_coldline = size_coldline + blob.size
    print("MULTI_REGIONAL: " + convert_size(size_multi_regional) + "\n" +
          "REGIONAL: " + convert_size(size_regional) + "\n" +
          "NEARLINE: " + convert_size(size_nearline) + "\n" +
          "COLDLINE: " + convert_size(size_coldline))

if __name__ == '__main__':
    size_by_class()
To run this program from the Google Cloud Shell, make sure you have previously installed the Client Library for Python with:
pip install --upgrade google-cloud-storage
And in order to provide authentication credentials to the application code, you must point the environment variable GOOGLE_APPLICATION_CREDENTIALS to the location of the JSON file that contains your service account key:
export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/[FILE_NAME].json"
Before running the script, set PROJECT_ID to the ID of your Project, and CLOUD_STORAGE_BUCKET to the name of your GCS Bucket.
Run the script with python main.py. Output should be something like:
MULTI_REGIONAL: 1.0 GB
REGIONAL: 300 MB
NEARLINE: 200 MB
COLDLINE: 10 MB
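As a side note, the same aggregation can be written more compactly with a dictionary keyed by storage class, which also picks up classes such as STANDARD and ARCHIVE without extra if branches. This is only a sketch under the same setup as the script above (it reuses convert_size() from that script, so it assumes it lives in the same file), and the bucket name is a placeholder:

from collections import defaultdict
from google.cloud import storage

def size_by_class_generic(bucket_name):
    """Sum blob sizes per storage class, whatever classes the bucket contains."""
    client = storage.Client()
    totals = defaultdict(int)
    for blob in client.list_blobs(bucket_name):
        totals[blob.storage_class] += blob.size or 0
    for storage_class, size in totals.items():
        print(storage_class, convert_size(size))  # convert_size() comes from the script above

# size_by_class_generic("your-bucket-name")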

How to rename files and folder in Amazon S3?

Is there any function to rename files and folders in Amazon S3? Any related suggestions are also welcome.
I just tested this and it works:
aws s3 --recursive mv s3://<bucketname>/<folder_name_from> s3://<bucket>/<folder_name_to>
There is no direct method to rename a file in S3. What you have to do is copy the existing file with a new name (just set the target key) and delete the old one.
aws s3 cp s3://source_folder/ s3://destination_folder/ --recursive
aws s3 rm s3://source_folder --recursive
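The same copy-then-delete pattern can be done in Python with boto3 for a single object. This is a minimal sketch; the bucket and key names are placeholders:

import boto3

s3 = boto3.client('s3')

bucket = 'my-bucket'              # placeholder
old_key = 'folder/old-name.csv'   # placeholder
new_key = 'folder/new-name.csv'   # placeholder

# copy the object under its new key, then delete the original
s3.copy_object(Bucket=bucket, CopySource={'Bucket': bucket, 'Key': old_key}, Key=new_key)
s3.delete_object(Bucket=bucket, Key=old_key)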
You can use the AWS CLI commands to mv the files
You can use either the AWS CLI or the s3cmd command to rename files and folders in an AWS S3 bucket.
Using s3cmd, use the following syntax to rename a folder:
s3cmd --recursive mv s3://<s3_bucketname>/<old_foldername>/ s3://<s3_bucketname>/<new_folder_name>
Using the AWS CLI, use the following syntax to rename a folder:
aws s3 --recursive mv s3://<s3_bucketname>/<old_foldername>/ s3://<s3_bucketname>/<new_folder_name>
I've just got this working. You can use the AWS SDK for PHP like this:
use Aws\S3\S3Client;
$sourceBucket = '*** Your Source Bucket Name ***';
$sourceKeyname = '*** Your Source Object Key ***';
$targetBucket = '*** Your Target Bucket Name ***';
$targetKeyname = '*** Your Target Key Name ***';
// Instantiate the client.
$s3 = S3Client::factory();
// Copy an object.
$s3->copyObject(array(
    'Bucket'     => $targetBucket,
    'Key'        => $targetKeyname,
    'CopySource' => "{$sourceBucket}/{$sourceKeyname}",
));
http://docs.aws.amazon.com/AmazonS3/latest/dev/CopyingObjectUsingPHP.html
This is now possible for files: select the file, then select Actions > Rename in the GUI.
To rename a folder, you instead have to create a new folder, select the contents of the old one, and copy/paste it across (under "Actions" again).
There are two ways to rename a file on AWS S3 storage:
1. Using the CLI tool:
aws s3 --recursive mv s3://bucket-name/dirname/oldfile s3://bucket-name/dirname/newfile
2. Using the SDK:
$s3->copyObject(array(
    'Bucket'     => $targetBucket,
    'Key'        => $targetKeyname,
    'CopySource' => "{$sourceBucket}/{$sourceKeyname}",
));
To rename a folder (which is technically a set of objects with a common prefix as key) you can use the aws CLI move command with --recursive option.
aws s3 mv s3://bucket/old_folder s3://bucket/new_folder --recursive
There is no way to rename a folder through the GUI; the fastest (and easiest, if you like the GUI) way to achieve this is to perform a plain old copy. To do this: create the new folder on S3 using the GUI, go to your old folder, select all, choose "Copy", then navigate to the new folder and choose "Paste" (under "Actions" again). When done, remove the old folder.
This simple method is very fast because it copies from S3 to itself (no need to re-upload or anything like that), and it also maintains the permissions and metadata of the copied objects as you would expect.
Here's how you do it in .NET, using S3 .NET SDK:
var client = new Amazon.S3.AmazonS3Client(_credentials, _config);
client.CopyObject(oldBucketName, oldfilepath, newBucketName, newFilePath);
client.DeleteObject(oldBucketName, oldfilepath);
P.S. Try to use the "Async" versions of the client methods where possible, even though I haven't done so here for readability.
This works for renaming the file in the same folder
aws s3 mv s3://bucketname/folder_name1/test_original.csv s3://bucket/folder_name1/test_renamed.csv
Below is a code example to rename a file on S3. My files were part-000* because they were Spark output files, so I copy the file to a new name in the same location and delete the part-000* file:
import boto3
client = boto3.client('s3')
response = client.list_objects(
    Bucket='lsph',
    MaxKeys=10,
    Prefix='03curated/DIM_DEMOGRAPHIC/',
    Delimiter='/'
)
name = response["Contents"][0]["Key"]
copy_source = {'Bucket': 'lsph', 'Key': name}
client.copy_object(Bucket='lsph', CopySource=copy_source,
                   Key='03curated/DIM_DEMOGRAPHIC/' + 'DIM_DEMOGRAPHIC.json')
client.delete_object(Bucket='lsph', Key=name)
File and folder are in fact objects in S3. You should use PUT OBJECT COPY to rename them. See http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html
Rename all the *.csv.err files in the <<bucket>>/landing dir into *.csv files with s3cmd:
export aws_profile='foo-bar-aws-profile'
while read -r f ; do tgt_fle=$(echo $f|perl -ne 's/^(.*).csv.err/$1.csv/g;print'); \
echo s3cmd -c ~/.aws/s3cmd/$aws_profile.s3cfg mv $f $tgt_fle; \
done < <(s3cmd -r -c ~/.aws/s3cmd/$aws_profile.s3cfg ls --acl-public --guess-mime-type \
s3://$bucket | grep -i landing | grep csv.err | cut -d" " -f5)
As answered by Naaz, direct renaming in S3 is not possible.
I have attached a code snippet which will copy all the contents.
The code is working; just add your AWS access key and secret key.
Here's what I did in the code:
-> copy the source folder contents (nested children and folders) and paste them in the destination folder
-> when the copying is complete, delete the source folder
package com.bighalf.doc.amazon;

import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.util.List;

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.CopyObjectRequest;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class Test {

    public static boolean renameAwsFolder(String bucketName, String keyName, String newName) {
        boolean result = false;
        try {
            AmazonS3 s3client = getAmazonS3ClientObject();
            List<S3ObjectSummary> fileList = s3client.listObjects(bucketName, keyName).getObjectSummaries();
            // some metadata to create empty folders - start
            ObjectMetadata metadata = new ObjectMetadata();
            metadata.setContentLength(0);
            InputStream emptyContent = new ByteArrayInputStream(new byte[0]);
            // some metadata to create empty folders - end
            // the final location is where the child folder contents of the existing folder should go
            String finalLocation = keyName.substring(0, keyName.lastIndexOf('/') + 1) + newName;
            for (S3ObjectSummary file : fileList) {
                String key = file.getKey();
                // updating the child folder location with the new location
                String destinationKeyName = key.replace(keyName, finalLocation);
                if (key.charAt(key.length() - 1) == '/') {
                    // if the name ends with a suffix (/), it is a folder
                    PutObjectRequest putObjectRequest = new PutObjectRequest(bucketName, destinationKeyName, emptyContent, metadata);
                    s3client.putObject(putObjectRequest);
                } else {
                    // if the name does not end with a suffix (/), it is a file
                    CopyObjectRequest copyObjRequest = new CopyObjectRequest(bucketName,
                            file.getKey(), bucketName, destinationKeyName);
                    s3client.copyObject(copyObjRequest);
                }
            }
            boolean isFolderDeleted = deleteFolderFromAws(bucketName, keyName);
            return isFolderDeleted;
        } catch (Exception e) {
            e.printStackTrace();
        }
        return result;
    }

    public static boolean deleteFolderFromAws(String bucketName, String keyName) {
        boolean result = false;
        try {
            AmazonS3 s3client = getAmazonS3ClientObject();
            // deleting folder children
            List<S3ObjectSummary> fileList = s3client.listObjects(bucketName, keyName).getObjectSummaries();
            for (S3ObjectSummary file : fileList) {
                s3client.deleteObject(bucketName, file.getKey());
            }
            // deleting the actual passed folder
            s3client.deleteObject(bucketName, keyName);
            result = true;
        } catch (Exception e) {
            e.printStackTrace();
        }
        return result;
    }

    public static void main(String[] args) {
        initializeAmazonObjects();
        boolean result = renameAwsFolder(bucketName, keyName, newName);
        System.out.println(result);
    }

    private static AWSCredentials credentials = null;
    private static AmazonS3 amazonS3Client = null;
    private static final String ACCESS_KEY = "";
    private static final String SECRET_ACCESS_KEY = "";
    private static final String bucketName = "";
    private static final String keyName = "";
    // renaming folder c to x from the key name
    private static final String newName = "";

    public static void initializeAmazonObjects() {
        credentials = new BasicAWSCredentials(ACCESS_KEY, SECRET_ACCESS_KEY);
        amazonS3Client = new AmazonS3Client(credentials);
    }

    public static AmazonS3 getAmazonS3ClientObject() {
        return amazonS3Client;
    }
}
In the AWS console, if you navigate to S3 you will see your folders listed. If you navigate into a folder, you will see the object(s) listed. Right-click and you can rename. Or, you can check the box in front of your object, then from the pull-down menu named ACTIONS you can select Rename. Just worked for me, 3-31-2019.
If you want to rename a lot of files from an s3 folder you can run the following script.
FILES=$(aws s3api list-objects --bucket your_bucket --prefix 'your_path' --delimiter '/' | jq -r '.Contents[] | select(.Size > 0) | .Key' | sed '<your_rename_here>')
for i in $FILES
do
aws s3 mv s3://<your_bucket>/${i}.gz s3://<your_bucket>/${i}
done
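For the same bulk-rename idea in Python, here is a sketch using boto3 (the bucket name, prefix, and rename rule below are placeholders; swap in your own logic for the sed expression):

import boto3

s3 = boto3.client('s3')
bucket = 'your-bucket'    # placeholder
prefix = 'your_path/'     # placeholder

def new_key_for(key):
    # placeholder rename rule, e.g. strip a trailing ".err"
    return key[:-4] if key.endswith('.err') else key

paginator = s3.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get('Contents', []):
        old_key = obj['Key']
        new_key = new_key_for(old_key)
        if new_key == old_key:
            continue
        # copy to the new key, then remove the original
        s3.copy_object(Bucket=bucket, CopySource={'Bucket': bucket, 'Key': old_key}, Key=new_key)
        s3.delete_object(Bucket=bucket, Key=old_key)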
What I did was create a new folder and move the old file objects into the new folder.
There seem to be a lot of 'issues' with folder structures in S3, since the storage is flat.
I have a Django project where I needed the ability to rename a folder but still keep the directory structure intact, meaning empty folders would need to be copied and stored in the renamed directory as well.
The aws cli is great, but neither cp, sync, nor mv copied empty folders (i.e. files ending in '/') over to the new folder location, so I used a mixture of boto3 and the aws cli to accomplish the task.
More or less, I find all folders in the renamed directory, then use boto3 to put them in the new location, then I cp the data with the aws cli and finally remove it.
import threading
import os

from django.conf import settings
from django.contrib import messages
from django.core.files.storage import default_storage
from django.shortcuts import redirect
from django.urls import reverse


def rename_folder(request, client_url):
    """
    :param request:
    :param client_url:
    :return:
    """
    current_property = request.session.get('property')
    if request.POST:
        # name the change
        new_name = request.POST['name']
        # old full path with www.[].com?
        old_path = request.POST['old_path']
        # remove the query string
        old_path = ''.join(old_path.split('?')[0])
        # remove the .com prefix item so we have the path in the storage
        old_path = ''.join(old_path.split('.com/')[-1])
        # remove empty values, this will happen at end due to these being folders
        old_path_list = [x for x in old_path.split('/') if x != '']
        # remove the last folder element with split()
        base_path = '/'.join(old_path_list[:-1])
        # # now build the new path
        new_path = base_path + f'/{new_name}/'
        # remove empty variables
        # print(old_path_list[:-1], old_path.split('/'), old_path, base_path, new_path)
        endpoint = settings.AWS_S3_ENDPOINT_URL
        # # recursively add the files
        copy_command = f"aws s3 --endpoint={endpoint} cp s3://{old_path} s3://{new_path} --recursive"
        remove_command = f"aws s3 --endpoint={endpoint} rm s3://{old_path} --recursive"
        # get_creds() is nothing special, it simply returns the elements needed via boto3
        client, resource, bucket, resource_bucket = get_creds()
        path_viewing = f'{"/".join(old_path.split("/")[1:])}'
        directory_content = default_storage.listdir(path_viewing)
        # loop over folders and add them by default, aws cli does not copy empty ones
        # so this is used to accommodate
        folders, files = directory_content
        for folder in folders:
            new_key = new_path + folder + '/'
            # we must remove the bucket name for this to work
            new_key = new_key.split(f"{bucket}/")[-1]
            # push this to a new thread
            threading.Thread(target=put_object, args=(client, bucket, new_key,)).start()
            print(f'{new_key} added')
        # # run command, which will copy all data
        os.system(copy_command)
        print('Copy Done...')
        os.system(remove_command)
        print('Remove Done...')
        # print(bucket)
        print('Folder renamed.')
        messages.success(request, f'Folder Renamed to: {new_name}')
    return redirect(request.META.get('HTTP_REFERER', f"{reverse('home', args=[client_url])}"))
S3DirectoryInfo has a MoveTo method that will move one directory into another directory, such that the moved directory will become a subdirectory of the other directory with the same name as it originally had.
The extension method below will move one directory to another directory, i.e. the moved directory will become the other directory. What it actually does is create the new directory, move all the contents of the old directory into it, and then delete the old one.
public static class S3DirectoryInfoExtensions
{
    public static S3DirectoryInfo Move(this S3DirectoryInfo fromDir, S3DirectoryInfo toDir)
    {
        if (toDir.Exists)
            throw new ArgumentException("Destination for Rename operation already exists", "toDir");
        toDir.Create();
        foreach (var d in fromDir.EnumerateDirectories())
            d.MoveTo(toDir);
        foreach (var f in fromDir.EnumerateFiles())
            f.MoveTo(toDir);
        fromDir.Delete();
        return toDir;
    }
}
There is a piece of software that lets you work with an S3 bucket and perform different kinds of operations.
Software Name: S3 Browser
S3 Browser is a freeware Windows client for Amazon S3 and Amazon CloudFront. Amazon S3 provides a simple web services interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web. Amazon CloudFront is a content delivery network (CDN). It can be used to deliver your files using a global network of edge locations.
If it's only a one-time task, then you can use the command line to perform these operations:
(1) Rename the folder in the same bucket:
s3cmd --access_key={access_key} --secret_key={secret_key} mv s3://bucket/folder1/* s3://bucket/folder2/
(2) Rename the Bucket:
s3cmd --access_key={access_key} --secret_key={secret_key} mv s3://bucket1/folder/* s3://bucket2/folder/
Where,
{access_key} = your valid access key for the S3 client
{secret_key} = your valid secret key for the S3 client
It's working fine without any problem.
Thanks