Google Cloud Storage - files not showing - google-cloud-platform

I have over 30 Leaflet maps hosted in a Google Cloud Platform bucket, and it has always been an easy process to upload my folder (an HTML file plus sub-folders containing .js and .css files) and share the map publicly.
I tried uploading another map today, but within the folder no files are showing and I get the following message: "There are no live objects in this folder. If you have object versioning enabled, this folder may contain archived versions of objects, which aren't visible in the console. You can list archived object versions using gsutil or the APIs."
Does anyone know what is going on here?

We have also seen this problem, and it seems that the issue is limited to buckets that have spaces in the name.
It's also not reproducible through the GCP web console, but if you use gsutil to upload a file to a bucket with a space in the name, it won't be visible in the web UI.
I can see from your screenshot that your bucket also has spaces (%20 in the URL).
If you need a workaround ASAP, you could rename your bucket...
But Google should fix this soon, I hope.

There is currently an open issue with the GCS/Console integration.
If file names contain any symbols that need URL encoding, they are not visible in the console, but they are still accessible via gsutil or the API (which is currently the recommended workaround).
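For anyone hitting this before the fix, the hidden objects can still be listed through the client libraries. A minimal sketch with the Node.js client, assuming a placeholder bucket name:

```typescript
// Minimal sketch of the API workaround using @google-cloud/storage.
// The bucket name is a placeholder.
import {Storage} from '@google-cloud/storage';

async function listAllObjects(bucketName: string): Promise<void> {
  const storage = new Storage();
  // getFiles returns every object, including ones the console hides
  // because their names need URL encoding (spaces, '#', etc.).
  const [files] = await storage.bucket(bucketName).getFiles();
  for (const file of files) {
    console.log(file.name);
  }
}

listAllObjects('my bucket with spaces').catch(console.error);
```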
The issue has been resolved as of 8-May-2018 10:00 UTC.

This can happen if the file doesn't have an extension: the UI treats it as a folder and lets you navigate into it, showing an empty folder instead of the file contents.

We had the same symptom (files show up via the API but are invisible in the web console and via the CLI).
The issue turned out to be that we were saving files to "./uploads", which Google interprets as "create a directory literally called '.' and then a subdirectory called uploads."
The fix was to upload to "uploads/" instead of "./uploads". We also ran a mass copy operation via the API for everything under "./uploads" (sketched below). All visible now!
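For reference, a mass copy like that can be scripted with the client library. A rough sketch with the Node.js client; the bucket name and prefixes are placeholders, not necessarily what we actually ran:

```typescript
// Rough sketch (@google-cloud/storage): copy everything stored under the
// accidental "./uploads/" prefix to a clean "uploads/" prefix.
import {Storage} from '@google-cloud/storage';

async function fixUploadsPrefix(bucketName: string): Promise<void> {
  const storage = new Storage();
  const bucket = storage.bucket(bucketName);

  const [files] = await bucket.getFiles({prefix: './uploads/'});
  for (const file of files) {
    const newName = file.name.replace(/^\.\//, ''); // "./uploads/x" -> "uploads/x"
    await file.copy(bucket.file(newName));
    // Optionally remove the original afterwards: await file.delete();
  }
}

fixUploadsPrefix('my-bucket').catch(console.error);
```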

I also had spaces in my URL and it was not working properly yesterday. I checked this morning and everything is working as expected. I still have the spaces in my URL, by the way.

Related

Hiding file source in an S3 bucket

I'm running an S3 bucket with a CloudFront distribution. Everything works, except that anyone can still read the source code.
So the bucket is at mybucket.domain.com and that works okay. However, navigating to mybucket.domain.com/script.js or mybucket.domain.com/style.css will reveal the contents of each file.
I have searched far and wide for a solution but seem to be coming up blank every time. I've tried things with the bucket policy and CloudFront settings to no avail. Any thoughts are appreciated. Thanks.
There's no way to prevent this. The web browser has to download those files to the local computer in order to render your website, which means they have to be publicly available, and there's no way to stop someone from viewing the source of publicly available files. Since copies of these files already exist on every computer that has visited your website, there is no way to keep people from viewing their source.
You shouldn't place anything in those files that shouldn't be publicly available.

Lost ability to edit code in AWS Lambda console

I have several Lambdas deployed to AWS, all created as single-file functions in the console. Everything was working fine until I cleared my caches and cookies in Chrome. Now the function code no longer shows up in the browser, any browser (I tried three). Also, all the Lambda functions now report that they are zip-file based, so I cannot re-enter the code from my git repo. The functions still operate properly; I just cannot edit them.
All new functions I create also don't open in the console editor. Something general/global has changed, not specific to any one function.
What can cause this? And across all browsers?
Most importantly how can I fix this?
You can download your code as a zip file via Actions > Export function and then Download deployment package. Maybe re-uploading the packages will fix your issue.
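If the console export is also misbehaving, the same package can be fetched through the API. A rough sketch with the AWS SDK for JavaScript v3; the function name is a placeholder:

```typescript
// Sketch: fetch the deployment package URL via the Lambda API
// (@aws-sdk/client-lambda). "my-function" is a placeholder name.
import {LambdaClient, GetFunctionCommand} from '@aws-sdk/client-lambda';

async function getPackageUrl(functionName: string): Promise<string | undefined> {
  const lambda = new LambdaClient({});
  const res = await lambda.send(new GetFunctionCommand({FunctionName: functionName}));
  // Code.Location is a pre-signed URL to download the deployment package zip.
  return res.Code?.Location;
}

getPackageUrl('my-function').then(url => console.log(url)).catch(console.error);
```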

Azure DevOps Copying files to Google cloud failed

We have been using this extension (https://marketplace.visualstudio.com/items?itemName=GlobalFreightSolutionsLtd.copy-files-to-google-buckets) to copy files from Azure DevOps to a Google Cloud bucket. After almost a year of use everything was perfect, until we got this error:
UnhandledPromiseRejectionWarning: ResumableUploadError: A resumable upload could not be performed. The directory, C:\Users\VssAdministrator.config, is not writable. You may try another upload, this time setting options.resumable to false.
Maybe someone has had a similar problem and can help solve it. Or should we contact the product owner to solve the issue? Any other options/suggestions for uploading files are also welcome. Thanks.
Judging by the error message, it is coming from the Node.js client library used to connect to GCP buckets; the extension is apparently built with this client library. The suggestion mentioned on GitHub would need to be implemented by the developer of the extension. Concretely, it looks like it may be using the createWriteStream method, which according to the docs:
Resumable uploads require write access to the $HOME directory. Through config-store, some metadata is stored. By default, if the directory is not writable, we will fall back to a simple upload. However, if you explicitly request a resumable upload, and we cannot write to the config directory, we will return a ResumableUploadError.
I would suggest contacting the extension publisher, or trying to disable resumable uploads within the extension options (if any).
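For reference, disabling resumable uploads in the Node.js client looks roughly like this; the bucket and file names are placeholders, and in your case the change would have to happen inside the extension rather than in your pipeline:

```typescript
// Sketch: simple (non-resumable) upload with @google-cloud/storage,
// which avoids the need for a writable $HOME config directory.
import {createReadStream} from 'fs';
import {Storage} from '@google-cloud/storage';

const storage = new Storage();

createReadStream('artifact.zip')
  .pipe(
    storage
      .bucket('my-bucket')       // placeholder bucket name
      .file('artifact.zip')      // placeholder object name
      .createWriteStream({resumable: false}) // force a simple upload
  )
  .on('error', err => console.error(err))
  .on('finish', () => console.log('upload complete'));
```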

Best way to publicly download a full folder with Amazon?

I'm building a launcher (in C#) that downloads a full game or app. The app can be very large (e.g. 5 GB) and I need to get it with the correct folder hierarchy, so the same launcher can check whether the user has the correct app or whether it needs to be repaired or updated.
I'm trying to do that with Amazon S3 and CloudFront, but it seems that I can only get individual objects and not the full folder of the app.
I have also stored the folder on an EC2 instance, and that works, but EC2 doesn't seem designed for that, so downloads are extremely slow.
Is there any Amazon service to do that?
Have you considered zipping the files first? It solves a lot of issues (folder structure, compression) and works great from S3 and CloudFront. It's a common solution for this use case.
You can do this in your application with the DownloadDirectory method in the TransferUtility class in the .NET SDK.
You can read more about the DownloadDirectory method here. By default I believe it only downloads objects in the root path, so don't forget to do it recursively for sub-folders if necessary.
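If you end up scripting the download yourself instead of using TransferUtility, the general pattern in any SDK is the same: list every key under the prefix and recreate the folder structure locally. Purely to illustrate that pattern, here is a rough sketch with the AWS SDK for JavaScript v3 (bucket, prefix, and paths are placeholders):

```typescript
// Illustrative sketch of the "download a whole folder" pattern:
// list every key under a prefix, recreate the directory structure
// locally, and stream each object to disk.
import {S3Client, ListObjectsV2Command, GetObjectCommand} from '@aws-sdk/client-s3';
import {createWriteStream, mkdirSync} from 'fs';
import {dirname, join} from 'path';
import {pipeline} from 'stream/promises';
import {Readable} from 'stream';

async function downloadPrefix(bucket: string, prefix: string, destDir: string): Promise<void> {
  const s3 = new S3Client({});
  let token: string | undefined;
  do {
    const page = await s3.send(new ListObjectsV2Command({
      Bucket: bucket,
      Prefix: prefix,
      ContinuationToken: token,
    }));
    for (const obj of page.Contents ?? []) {
      if (!obj.Key || obj.Key.endsWith('/')) continue; // skip "folder" placeholder keys
      const target = join(destDir, obj.Key);
      mkdirSync(dirname(target), {recursive: true});
      const res = await s3.send(new GetObjectCommand({Bucket: bucket, Key: obj.Key}));
      await pipeline(res.Body as Readable, createWriteStream(target));
    }
    token = page.NextContinuationToken;
  } while (token);
}

downloadPrefix('my-game-bucket', 'game/', './game').catch(console.error);
```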

AWS S3 Buckets Access Errors & Buckets Showing Old Files

I have S3 buckets that I have been using for years, and today when I logged in through the console to manually upload some files, I noticed that all of my buckets are showing ERROR under the Access tab.
While I can still see the files, I'm unable to upload or modify any files, and all files in my buckets are showing old versions from December, even though I updated some of the text files just this month. Also, all files are missing their meta tags.
I have not managed or changed any permissions in my account in years, and I'm the only one with access to these files.
Anyone else had this issue? How can I fix this?
It really feels like AWS had some major failure and replaced my current files with some old backup.
I had the same issue (except for the old-files part). In my case it was caused by a browser plugin called "Avira Browserschutz", a plugin similar to Adblock. Other plugins such as uBlock Origin might result in identical behavior.
Test it by disabling such plugins or by visiting AWS in incognito mode.