I am having a problem with uploading hidden files to S3 and getting them back.
Basically, I upload the hidden files to S3 and download them again. The files become visible (the hidden checkbox is unchecked).
Btw, this is Windows.
Is there anything I am missing, or is this the way it's supposed to work?
Thanks
The hidden flag is a Windows file system thing. S3 isn't a Windows filesystem, so when you upload a file to S3, it can't retain that Windows-specific flag. I think you would have to write some custom code to set extra, custom metadata on the S3 objects if they were marked as hidden in Windows, and custom download code to check for that metadata and mark the files as hidden if they are being downloaded onto a Windows machine.
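A minimal sketch of that custom-metadata idea, assuming boto3 and a Windows machine; the bucket and key names are placeholders:

```python
# Minimal sketch (assumptions: boto3 installed, running on Windows;
# bucket and key names are placeholders).
import ctypes
import boto3

FILE_ATTRIBUTE_HIDDEN = 0x2
s3 = boto3.client("s3")

def upload_preserving_hidden(path, bucket, key):
    # Record the Windows "hidden" attribute as custom S3 object metadata.
    attrs = ctypes.windll.kernel32.GetFileAttributesW(path)
    hidden = bool(attrs != -1 and attrs & FILE_ATTRIBUTE_HIDDEN)
    s3.upload_file(path, bucket, key,
                   ExtraArgs={"Metadata": {"hidden": str(hidden).lower()}})

def download_restoring_hidden(bucket, key, path):
    # Re-apply the hidden attribute if the metadata says the file was hidden.
    s3.download_file(bucket, key, path)
    head = s3.head_object(Bucket=bucket, Key=key)
    if head.get("Metadata", {}).get("hidden") == "true":
        ctypes.windll.kernel32.SetFileAttributesW(path, FILE_ATTRIBUTE_HIDDEN)
```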
There might be some specific S3 applications for Windows that would manage this. What tool are you using right now to upload to S3?
There are many compressed files inside Google Cloud Storage.
I want to unzip the zipped files, rename them, and save them back to another bucket.
I've seen a lot of posts, but I couldn't find any way other than downloading them with gsutil and handling them locally.
Is there any other way to do this?
To modify a file, such as unzipping, you must read, modify and then write. This means download, unzip and upload the extracted files.
Use gsutil or another tool, one of the SDKs, or the REST APIs. To unzip a file, use a zip tool or one of the libraries that support zip operations.
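A minimal sketch of that download/unzip/upload flow using the Python client library; the bucket names, the zip object name, and the rename scheme are assumptions:

```python
# Minimal sketch (assumptions: google-cloud-storage installed; bucket names,
# the zip object name, and the rename prefix are placeholders).
import io
import zipfile

from google.cloud import storage

client = storage.Client()
src_bucket = client.bucket("source-bucket")        # hypothetical
dst_bucket = client.bucket("destination-bucket")   # hypothetical

# Download the zip into memory, extract it, and upload each member
# to the destination bucket under a new name/prefix.
zip_bytes = src_bucket.blob("archives/data.zip").download_as_bytes()
with zipfile.ZipFile(io.BytesIO(zip_bytes)) as archive:
    for member in archive.namelist():
        new_name = f"unzipped/{member}"  # rename/prefix as needed
        dst_bucket.blob(new_name).upload_from_string(archive.read(member))
```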
You can use Cloud Functions for this. The function gets triggered when an object is created/finalized and does the job automatically.
You can also use the same logic to iterate over all the existing zip files, do the work, and move the results to another bucket. One thing to keep in mind: a single invocation may not be able to process all files in one go, so for the files that already exist in the bucket you can run the program from a local machine instead.
Here is the list of client libraries to connect to Cloud Storage
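A minimal sketch of such a function, assuming a 1st-gen background Cloud Function triggered on object finalize; the destination bucket and the unzip/rename details (same idea as the sketch above) are placeholders:

```python
# Minimal sketch of a background Cloud Function triggered on object finalize
# (assumptions: google-cloud-storage available; 'destination-bucket' and the
# "unzipped/" prefix are placeholders).
import io
import zipfile

from google.cloud import storage

client = storage.Client()

def on_zip_uploaded(event, context):
    """Triggered by google.storage.object.finalize."""
    bucket_name = event["bucket"]
    object_name = event["name"]
    if not object_name.endswith(".zip"):
        return
    zip_bytes = client.bucket(bucket_name).blob(object_name).download_as_bytes()
    dst = client.bucket("destination-bucket")  # hypothetical target bucket
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as archive:
        for member in archive.namelist():
            dst.blob(f"unzipped/{member}").upload_from_string(archive.read(member))
```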
We are currently using this extension (https://marketplace.visualstudio.com/items?itemName=GlobalFreightSolutionsLtd.copy-files-to-google-buckets) to copy files from Azure to a Google Cloud bucket. After almost a year of use everything was fine, until we got this error:
UnhandledPromiseRejectionWarning: ResumableUploadError: A resumable upload could not be performed. The directory, C:\Users\VssAdministrator.config, is not writable. You may try another upload, this time setting options.resumable to false.
Maybe someone has had a similar problem and can help solve it, or should we contact the product owner about the issue? Any other options/suggestions for uploading files are also welcome. Thanks.
Judging by the error message, it is coming from the Node.js client library used to connect to GCP buckets. The extension is apparently built with this GCP client library, so the suggestion mentioned on GitHub would need to be implemented by the developer of the extension. Concretely, it looks like it may be using the createWriteStream method which, according to the docs:
Resumable uploads require write access to the $HOME directory. Through config-store, some metadata is stored. By default, if the directory is not writable, we will fall back to a simple upload. However, if you explicitly request a resumable upload, and we cannot write to the config directory, we will return a ResumableUploadError
I would suggest contacting the extension publisher, or trying to disable resumable uploads within the extension options (if any).
I have over 30 Leaflet maps hosted on my Google Cloud Platform bucket (for example) and it has always been an easy process to upload my folder (which includes an html file with sub-folders including .js and .css files) and share the map publicly.
I tried uploading another map today, but within the folder there are no files showing and I get the following message "There are no live objects in this folder. If you have object versioning enabled, this folder may contain archived versions of objects, which aren't visible in the console. You can list archived object versions using gsutil or the APIs."
Does anyone know what is going on here?
We have also seen this problem, and it seems that the issue is limited to buckets that have spaces in the name.
It's also not reproducible through the gcloud web console, but if you use gsutil to upload a file to a bucket with a space in the name then it won't be visible on the web UI.
I can see from your screenshot that your bucket also has spaces (%20 in the url).
If you need a workaround asap, you could rename your bucket...
But google should fix this soon, I hope.
There is currently an open issue with the GCS/Console integration.
If file names contain any symbols that need URL encoding, they are not visible in the console, but they are accessible via gsutil/the API (which is currently the recommended workaround).
The issue has been resolved as of 8-May-2018 10:00 UTC.
This can happen if the file doesn't have an extension: the UI treats it as a folder and lets you navigate into it, showing a blank folder instead of the file contents.
We had the same symptom (files show up in API but invisible on the web and via CLI).
The issue turned out to be that we were saving files to "./uploads", which Google interprets as "create a directory literally called '.' and then a subdirectory called uploads."
The fix was to upload to "uploads/" instead of "./uploads". We also just ran a mass copy operation via the API for everything under "./uploads". All visible now!
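As a hedged illustration of that mass-copy step using the google-cloud-storage Python client (the bucket name and prefixes are assumptions):

```python
# Minimal sketch: copy every object under the "./uploads/" prefix to a clean
# "uploads/" prefix (assumptions: google-cloud-storage installed; the bucket
# name is a placeholder).
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-bucket")  # hypothetical bucket name

for blob in client.list_blobs(bucket, prefix="./uploads/"):
    new_name = blob.name.replace("./uploads/", "uploads/", 1)
    bucket.copy_blob(blob, bucket, new_name)  # copy to the clean prefix
    # blob.delete()  # optionally remove the old "./uploads/" object
```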
I also had spaces in my URL and it was not working properly yesterday. I checked this morning and everything is working as expected. I still have the spaces in my URL, btw.
So I have a use case where I need to put files from on-prem FTP to S3.
The size of each file (XML) is 5KB max.
The number of files is about 100 per minute.
No, the use case is such that as soon as files arrive at the FTP location, I need to put them into the S3 bucket immediately.
What would be the best way to achieve that?
Here are my options:
Using the AWS CLI at my FTP location (push mechanism).
Using Lambda (pull mechanism).
Writing a Java application to put the files into S3 from FTP.
Or is there anything built-in that I can leverage?
Basically, I need to put each file in S3 as soon as possible, because a UI is built on top of S3 and if a file does not arrive immediately I might be in trouble.
The easiest would be to use the AWS Command-Line Interface (CLI), or an API call if you wish to do it from application code.
It doesn't really make sense to do it via Lambda, because Lambda would need to somehow retrieve the file from FTP and then copy it to S3 (so it would be doing double work).
You can certainly write a Java application to do it, or simply call the AWS CLI (written in Python) since it will work out-of-the-box.
You could either use aws s3 sync to copy all new/updated files, or copy specific files with aws s3 cp. If you have so many files, it's probably best to specify the files otherwise it will waste time scanning many historical files that don't need to be copied.
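If you go the API route from application code, here is a minimal sketch of the push approach using boto3; the bucket name, the local FTP landing directory, and the polling interval are assumptions:

```python
# Minimal sketch (assumptions: boto3 installed, credentials configured;
# the bucket name and the FTP landing directory are placeholders).
import time
from pathlib import Path

import boto3

s3 = boto3.client("s3")
FTP_DIR = Path(r"C:\ftp\incoming")   # hypothetical FTP landing directory
BUCKET = "my-ui-bucket"              # hypothetical bucket name

uploaded = set()

while True:
    for xml_file in FTP_DIR.glob("*.xml"):
        if xml_file.name not in uploaded:
            # Upload the new file and remember it so it isn't re-sent.
            s3.upload_file(str(xml_file), BUCKET, f"incoming/{xml_file.name}")
            uploaded.add(xml_file.name)
    time.sleep(5)  # poll every few seconds; tune for ~100 files/minute
```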
The ultimate best case would be for the files to be sent to S3 directly, without involving FTP at all!
I want to view files such as Excel or zip (or any other format) in the browser without them getting downloaded.
I am able to display image and PDF files in the browser, but unable to view any other formats such as zip or xls.
I am storing my files in S3.
What should I do?
Web browsers are not able to natively display most file types. They can render HTML and can display certain types of images (eg JPG, PNG), but only after these files are actually downloaded to your computer.
The same goes for PDFs -- they are downloaded, then a browser plug-in renders the content.
When viewing files (eg Excel spreadsheets and PDF files) within services like Gmail and Google Drive, the files are typically converted into images on the server end and those images are sent to your computer. Amazon S3 is purely a storage service and does not offer a conversion service like this.
Zip files are a method of compressing files and also storing multiple files within a single archive file. Some web services might offer the ability to list files within a Zip, but again Amazon S3 is purely a storage service and does not offer this capability.
To answer your "What should I do?" question, some options are:
Download the files to your computer to view them, or
Use a storage service that offers these capabilities (many of which store the actual files in Amazon S3, but add additional services to convert the files for viewing online)
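For instance, if you wanted your own application to list the files inside a Zip stored in S3 (since S3 itself won't do it), a minimal sketch with boto3 might look like this; the bucket and key names are placeholders:

```python
# Minimal sketch: read a zip object from S3 and list its contents
# (assumptions: boto3 installed; bucket and key names are placeholders).
import io
import zipfile

import boto3

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="my-bucket", Key="archives/data.zip")  # hypothetical
with zipfile.ZipFile(io.BytesIO(obj["Body"].read())) as archive:
    for name in archive.namelist():
        print(name)
```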
I might be a bit too late but did you try Filestash? (I made it)
This is what it looks like when you open an xls document on S3:
I aim to support all the common formats, and the list of already-supported formats is rather big.