AWS CLI sync from S3 to local fails with Errno 20

I'm using the following command
aws s3 sync s3://mys3bucket/ .
to download all the files AND directories from my S3 bucket "mys3bucket" into an empty folder. In this bucket there is a directory called "albums". However, instead of copying the files into an "albums" directory, I am receiving the following error message (one example of many):
download failed: s3://mys3bucket//albums/albums/5384 to albums/albums/5384 [Errno 20] Not a directory: u'/storage/mys3bucket//albums/albums/5384'
When I look to see which files, if any, did get copied into the albums folder, there is only one file there, called "albums", which contains the text "{E40327AD-517B-46e8-A6D2-AF51BC263F50}" when I open it.
The behavior is similar for all the other directories in this bucket: I see far more Errno 20 errors than successful downloads. There is over 100 GB of image files in the albums folder, but not a single one downloads.
Any suggestions?

I suspect the problem here is that you have both a 'directory' and a 'file' on S3 which have the same name. If you delete the 'file' from S3 then you should find that the directory will sync again.
I have found that this situation can occur when using desktop clients to view an S3 bucket, or something like s3sync.
http://www.witti.ws/blog/2013/12/03/transitioning-s3sync-aws-cli/
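As a concrete sketch of that cleanup (assuming the blocking object is the stray "albums/albums" key from the error above; add the extra leading slash if your keys really begin with "/", as the doubled "//" in the error suggests):

# A name collision shows up at the same level as both a plain object and a PRE(fix)
aws s3 ls s3://mys3bucket/albums/albums
# Remove only the plain object (no --recursive), leaving everything under albums/albums/ untouched
aws s3 rm s3://mys3bucket/albums/albums
# Re-run the sync once the conflicting object is gone
aws s3 sync s3://mys3bucket/ .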

Related

AWS S3 - Use powershell to delete all files but keep the folders

I have a PowerShell script that downloads all files from an S3 bucket and then removes them from the bucket. All the files I'm removing are stored in a subfolder of the S3 bucket, and I just want to delete the files but keep the subfolders.
I'm currently using the following command to delete each file in S3 once it has been downloaded:
Remove-S3Object -BucketName $S3Bucket -Key $key -Force
My problem is that once it removes all the files in the subfolder, the subfolder is removed as well. Is there a way to remove the files but keep the subfolder, using PowerShell? I believe I can do something like this:
aws s3 rm s3://<key_to_be_removed> --exclude "<subfolder_key>"
but I'm not quite sure whether that will work.
I'm looking for the best way to accomplish this; at the moment, my only option is to recreate the subfolder via the script if it no longer exists.
The only way to keep an empty folder is to create a zero-length object whose key is the folder name with a trailing slash. This is actually how the S3 console creates an empty folder.
You can check this by running $ aws s3 ls s3://your-bucket/folderfoo/ and observing an object of zero bytes in the output.
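For instance, you can create and verify such a marker with the CLI (a sketch reusing the your-bucket/folderfoo/ example above):

# Create a zero-byte object whose key ends in "/"; this is the console-style "folder" marker
aws s3api put-object --bucket your-bucket --key folderfoo/
# The marker then shows up as a 0-byte entry under the prefix
aws s3 ls s3://your-bucket/folderfoo/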
As already commented, S3 does not really have folders the way file systems do. The folders presented by most S3 browsers are just generated from the paths of the objects. If you upload an object named folder/file, the browsers will present folder as a folder containing a file called file. But technically, all that exists is the object folder/file; the folder does not exist on its own.
You can explicitly create a folder by creating an empty object whose key is the folder path with a trailing slash: folder/. If you do that, it will appear that the folder exists even when there are no files in it. If you do not, the virtual folder disappears once you remove all objects under it.
Now the question is whether your command also removes that empty object representing the folder; I cannot tell you that.
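If it does, one workable pattern is to clear the prefix and then put the marker object back. Here is a sketch with the AWS CLI and hypothetical bucket/prefix names, rather than the PowerShell cmdlets above:

# Delete every object under the prefix, which may also remove the zero-byte "subfolder/" marker
aws s3 rm s3://your-bucket/subfolder/ --recursive
# Re-create the marker so the empty "folder" still shows up in the console
aws s3api put-object --bucket your-bucket --key subfolder/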

WinSCP put to S3 bucket "folder" when path doesn't exist and user doesn't have access to list objects

Using the WinSCP client, how can I load a CSV file to an S3 bucket with the following conditions:
The only S3 access I have is to put an object at this example path: s3://my_bucket/folder1/folder2
This logical directory doesn't exist unless I load the file: when I upload my file, a Lambda function fires and moves the newly uploaded object, so this directory only "exists" for a split second after the PutObject.
I'm trying to build a WinSCP script like so:
open s3://[my_id]:[my_key]#s3.amazonaws.com -rawsettings S3DefaultRegion="[my_region]"
put "[source_dir]/file1.csv" /[my_bucket]/[folder1]/[folder2]/
put "[source_dir]/file2.csv" /[my_bucket]/[folder1]/[folder2]/
exit
but this returns an error:
Connecting to host...
Access denied.
Access Denied
Connection failed.
I updated the open statement to include the bucket/prefix:
open s3://[my_id]:[my_key]#s3.amazonaws.com/[my_bucket]/[folder1]/[folder2] -rawsettings S3DefaultRegion="[my_region]"
and get this error:
Connecting to host...
File or folder '[my_bucket]/[folder1]/[folder2]' does not exist.
Connection failed.
I simply want to load a file to [my_bucket]/[folder1]/[folder2] in the same way as this AWS CLI script, which works without issue:
aws s3 cp [source_dir]/file1.csv s3://[my_bucket]/[folder1]/[folder2]
aws s3 cp [source_dir]/file2.csv s3://[my_bucket]/[folder1]/[folder2]
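For comparison, a single put-object call expresses the same upload and should only need s3:PutObject on the destination key, with no listing involved; a sketch reusing the placeholders above:

# Upload one object directly; nothing in the bucket has to be listed or exist beforehand
aws s3api put-object --bucket [my_bucket] --key [folder1]/[folder2]/file1.csv --body [source_dir]/file1.csv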

Access denied when trying to upload audio files to aws s3 bucket

I'm trying to upload audio files to a folder in my s3 bucket. I'm doing this by dragging and dropping from my laptop and hitting the upload button once I have dropped the last file. Some of the files failed to upload and instead gave me an error message saying
Access Denied. You don't have permissions to upload files and folders.
How do I fix that?
Adding to Frank Din's answer, I was able to upload a folder's 80+ images in one go by selecting "Add Folder" instead of drag-and-dropping all the images at once.
I was eventually able to upload all the audio files.
I think the problem in my case was that I was trying to upload all the files at practically the same time by dragging and dropping them all in one go.
I fixed it by uploading the files one at a time, waiting for each one to finish before starting the next.

How to auto refresh remote list, after uploading a local folder to a Storj bucket through FileZilla?

In the FileZilla client, when a local folder is dragged and dropped into a remote directory, which part of the FileZilla code recursively sends the commands to transfer (upload) all local files and sub-folders within the selected local folder to the remote end?
My main goal is to insert a command that lists or refreshes the remote directory once the upload is complete. This already happens for the FTP and SFTP protocols, but I am not able to achieve it for the Storj feature.
I have tried including the "list" or refresh commands at the following points in different source files:
at the end of the "put" command within the /src/storj/fzstorj.cpp file
after the "Transfers finished" notification in the void CQueueView::ActionAfter(bool warned) function in the /src/interface/QueueView.cpp file
Reason: this notification is displayed when all files and subfolders of a selected local folder have been uploaded to a Storj bucket.
I also tried tracking the files that take part in the process, mainly those in the /src/engine/storj folder, such as file_transfer.cpp, which sends the "put" command through the int CStorjFileTransferOpData::Send() function.
This did not help much.
While checking what issues commands to the Storj engine, I observed that it is done by calling void CCommandQueue::ProcessCommand(CCommand *pCommand, CCommandQueue::command_origin origin) in /src/interface/commandqueue.cpp.
The expected result is that the Storj bucket or upload path auto-refreshes once all the desired files and sub-folders have been uploaded from the local end through the FileZilla client.
Any hint towards the solution would be of great help to me.
Thank You!

AWS Pipeline: Staging local files to S3 failed. The request signature we calculated does not match the signature you provided

Here's my setup:
I am trying to copy files from an external web server to an S3 bucket using AWS Data Pipeline.
To do this, I'm using a ShellCommandActivity that runs a script to download the files to the output bucket specified in the pipeline. In the script I use the environment variable ${OUTPUT1_STAGING_DIR} to address the bucket. Of course, I set 'staging' to true in my pipeline.
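For reference, the activity script only has to write its downloads into that staging directory; the pipeline then uploads the contents to the output DataNode. A minimal sketch of such a script (assuming wget is available on the pipeline's EC2 resource; the URL is a placeholder):

#!/bin/bash
# Anything written to ${OUTPUT1_STAGING_DIR} is copied to the S3 output DataNode
# by the pipeline once the activity finishes, provided staging is enabled.
wget -P "${OUTPUT1_STAGING_DIR}" "https://example.com/reports/report.csv"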
When the script finishes, the state of the activity becomes "FAILED" with the following error:
Staging local files to S3 failed. The request signature we calculated does not match the signature you provided. Check your key and signing method
When I look in the stdout file, I can see that my script finished successfully; only the staging to the bucket did not work.
I reckon this could be a permissions problem with the bucket, but I have no idea what I would have to change.
I came across some discussions where people got this error because the path to the bucket was configured incorrectly, so this is what I entered as the pipeline DataNode's Directory Path:
s3://testBucket
Is this correct?
I would appreciate any help here!
The problem was the DataNode Directory Path: it cannot be just a bucket; it HAS to be a directory inside a bucket.
Like this:
s3://testBucket/test
Great work with the error messages, Amazon!