I'm trying to upload audio files to a folder in my S3 bucket. I'm doing this by dragging and dropping from my laptop and hitting the upload button once I have dropped the last file. Some of the files failed to upload and instead gave me an error message saying
Access Denied. You don't have permissions to upload files and folders.
How do I fix that?
Adding to Frank Din's answer, I was able to upload a folder's 80+ images in one go by selecting "Add Folder" instead of drag-and-dropping them all at once.
I was eventually able to upload all the audio files.
I think the problem in my case was that I was trying to upload all the files at practically the same time by dragging and dropping them all in one go.
I fixed that by uploading the files one at a time, waiting for each upload to finish before starting the next.
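If the console keeps failing on large drag-and-drop batches, another option is to let the AWS CLI upload the folder, since it queues and retries the individual uploads itself. This is only a sketch; the local folder, bucket, and prefix names below are placeholders:

aws s3 cp ./audio-files s3://my-bucket/audio/ --recursive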
We are using an S3 bucket to hold zip files that customers create and then download. We are using CloudFront only to handle the SSL. We have caching disabled.
The customer receives an email to download their zip file, and that works great. An S3 lifecycle rule removes the file after 2 weeks. Now, if they add more photos to their account and re-request their zip file, it overwrites the current zip file with the new version, so the link is exactly the same. But when they download, they get the previous zip file, not the new one.
Additionally, after the two weeks the file is removed, and if they then try to download it they get an error that basically says they need to log in and re-request their photos. But when they generate a new zip file, their link still gives them the error message.
I could have the Lambda that creates the zip file also invalidate it in CloudFront when it creates it, but I didn't think I needed to invalidate since we aren't caching?
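For reference, the invalidation I'm considering would be roughly the following (the Lambda would make the equivalent SDK call; the distribution ID and path here are placeholders):

aws cloudfront create-invalidation --distribution-id E123EXAMPLE --paths "/downloads/customer.zip"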
Below is the screenshot of the caching policy I have selected in CloudFront
I have a 121MB MP3 file I am trying to upload to my AWS S3 so I can process it via Amazon Transcribe.
The MP3 file comes from an MP4 file I stripped the audio from using FFmpeg.
When I try to upload the MP3, using the S3 object upload UI in the AWS console, I receive the below error:
InvalidPart
One or more of the specified parts could not be found. The part may not have been uploaded, or the specified entity tag may not match the part's entity tag.
The error makes reference to the MP3 being a multipart upload with a missing part, but it's not a multipart file.
I have re-run the MP4 file through FFmpeg 3 times in case the 1st file was corrupt, but that has not fixed anything.
I have searched a lot on Stack Overflow and have not found a similar case where anyone uploading a single 5 MB+ file received the error I am getting.
I've also ruled out FFmpeg as the issue by saving the audio as an MP3 with VLC instead, but I receive the exact same error.
What is the issue?
Here's the console in case it helps:
121 MB is below the 160 GB single-object upload limit of the S3 console, the 5 GB single-object upload limit of the REST API / AWS SDKs, and the 5 TB limit on multipart uploads, so I really can't see the issue.
Assuming the file exists and you have a stable internet connection (no corrupted uploads), you may have incomplete multipart upload parts in your bucket that are somehow conflicting with the upload. Either follow this guide to remove them and try again, or try creating a new folder/bucket and re-uploading.
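For example, you could check for and abort any leftover multipart uploads from the CLI; this is just a sketch, with a placeholder bucket name and key, and the upload ID comes from the listing output:

aws s3api list-multipart-uploads --bucket my-bucket
aws s3api abort-multipart-upload --bucket my-bucket --key audio.mp3 --upload-id <UploadId>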
You may also have a browser caching issue or extension conflict, so try incognito mode (with extensions disabled) or another browser if re-uploading to another bucket/folder doesn't work.
Alternatively, try the AWS CLI s3 cp command or a quick "S3 file upload" application in a supported SDK language to make sure that it's not a console UI issue.
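For example (again just a sketch, with placeholder file and bucket names):

aws s3 cp ./audio.mp3 s3://my-bucket/audio.mp3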
In the FileZilla client, when a local folder is dragged and dropped into a remote directory, which part of the FileZilla code recursively sends the commands to transfer (upload) all local files and sub-folders (within the selected local folder) to the remote end?
My main purpose is to insert a command to either list or refresh the remote directory once the upload is complete. Although this is already being done for the FTP and SFTP protocols, I am not able to do so for the Storj feature.
I have tried including the "list" or refresh commands at the following points in the code:
at the end of the "put" command within /src/storj/fzstorj.cpp file
after the "Transfers finished" notification in void CQueueView::ActionAfter(bool warned) function in /src/interface/QueueView.cpp file
Reason: this notification is displayed when all files and subfolders of a selected local folder have been uploaded to a Storj bucket.
I also tried tracking the files that take part in the process, mainly those within the /src/engine/storj folder, like file_transfer.cpp, which sends the "put" command through the int CStorjFileTransferOpData::Send() function.
This did not help much.
While checking what issues the commands to the Storj engine, I observed it is done by calling void CCommandQueue::ProcessCommand(CCommand *pCommand, CCommandQueue::command_origin origin) in /src/interface/commandqueue.cpp.
The expected output is an auto-refresh of the Storj bucket or upload path once all desired files and sub-folders have been uploaded from the local end through the FileZilla client.
Any hint towards the solution would be of great help to me.
Thank You!
I am trying to build a workflow to update files on an S3 bucket and invalidate them on CloudFront so they get removed from its cache.
These files consist of JS, CSS, images, media, etc. I am using grunt to minify them.
This is what an ideal scenario in my opinion would be:
run grunt on the latest codebase to prepare for distribution;
upload the new files from step 1 to S3 using the AWS client tools;
invalidate these new files on CloudFront using the AWS client tools.
The problem I'm facing is that, on step 1, the minified files all have a newer timestamp than what's on S3, so when I run aws s3 sync, it will try to upload all the files back to S3. I just want the modified files to be uploaded.
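For reference, in CLI terms I picture steps 2 and 3 roughly like this (the dist/ folder, bucket name, and distribution ID are all placeholders):

aws s3 sync dist/ s3://my-bucket/
aws cloudfront create-invalidation --distribution-id E123EXAMPLE --paths "/*"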
I'm open to suggestions on changing the entire workflow as well. Any suggestions?
s3cmd would be able to solve the problem of uploading only those files which have been modified. Rather than checking for timestamp changes, it checks for content changes: internally it computes an MD5 hash for each file and compares the local version with the one present on S3, uploading only those files whose MD5 hashes don't match.
It has many command-line options, including options to invalidate the uploaded files in a CloudFront distribution.
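For example, something along these lines should upload only the changed files and invalidate them in CloudFront; the local path and bucket name are placeholders, and --cf-invalidate assumes the distribution's origin is that bucket:

s3cmd sync --cf-invalidate ./dist/ s3://my-bucket/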
I'm using the following command:
aws s3 sync s3://mys3bucket/ .
to download all the files AND directories from my S3 bucket "mys3bucket" into an empty folder. In this bucket is a directory called "albums". However, instead of copying the files into an "albums" directory, I receive the following error message (an example):
download failed: s3://mys3bucket//albums/albums/5384 to albums/albums/5384 [Errno 20] Not a directory: u'/storage/mys3bucket//albums/albums/5384'
When I look in the folder to see what files, if any, did get copied into the albums folder, there is only one file in there called "albums", which, when I open it, contains the text "{E40327AD-517B-46e8-A6D2-AF51BC263F50}".
This behavior is the same for all the other directories in this bucket. I see far more of the Errno 20 errors than successful downloads. There are over 100 GB of image files in the albums folder, but not a single one downloads.
Any suggestions?
I suspect the problem here is that you have both a 'directory' and a 'file' on S3 which have the same name. If you delete the 'file' from S3 then you should find that the directory will sync again.
I have found that this situation can occur when using desktop clients to view an S3 bucket, or something like s3sync.
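If it helps, this is roughly how you could spot and remove the conflicting object with the AWS CLI, using the bucket name from the question (the exact key may differ given the double slash in the error path). Without --recursive, the rm deletes only that single object, so the contents of the albums/ prefix are untouched:

aws s3 ls s3://mys3bucket/albums/
aws s3 rm s3://mys3bucket/albums/albums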
http://www.witti.ws/blog/2013/12/03/transitioning-s3sync-aws-cli/