I created a storage bucket to put files in and uploaded one video file. Now whenever I try to add any other file, it says the file already exists, but it doesn't. I couldn't find any solutions or any threads about this. Has anyone faced this issue?
Found the issue:
If you are logged in with multiple accounts, even when you have selected the correct one, GCS sometimes acts as if you are using a different account and won't let you do anything.
I cleared my cookies and it worked. (Or, if you'd rather not clear them, log in from a private tab.)
I am a beginner to GCP. I want to have two folders, processed and unprocessed, in a Cloud Storage bucket. Whenever a file arrives in the bucket from any source, a Cloud Function should be triggered: if the file is successfully inserted into the target (such as BigQuery), the file goes into the processed folder; if not, into the unprocessed folder.
I want to know how I can get alerts when files go into the unprocessed (error) folder.
Do I have to write code for this, perhaps another Cloud Function, or is there some other way to get alerts?
Any help would be appreciated.
Thank you
As you mentioned, Cloud Functions is the right approach.
Only a simple function is required; deploy it with the proper trigger associated with your bucket.
More details, with examples, can be found here:
https://cloud.google.com/functions/docs/calling/storage
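For the alerting part you often don't need any extra code: have the function log at ERROR severity whenever a file ends up in unprocessed/, then create a log-based alert in Cloud Monitoring that matches that message and notifies you by email, SMS, etc. A minimal sketch of such a function, assuming a finalize trigger on the bucket (the table and folder names are hypothetical, and the BigQuery load is simplified):

    import logging

    from google.cloud import bigquery, storage

    bq = bigquery.Client()
    gcs = storage.Client()

    # Hypothetical destination table -- substitute your own.
    TABLE = "my-project.my_dataset.landing_table"

    def on_file_arrival(event, context):
        """Background function triggered by google.storage.object.finalize."""
        name = event["name"]
        if name.startswith(("processed/", "unprocessed/")):
            return  # moving a file fires this trigger again; ignore our own moves
        bucket = gcs.bucket(event["bucket"])
        blob = bucket.blob(name)
        try:
            job = bq.load_table_from_uri(f"gs://{event['bucket']}/{name}", TABLE)
            job.result()  # raises if the load job fails
            bucket.rename_blob(blob, f"processed/{name}")
        except Exception:
            bucket.rename_blob(blob, f"unprocessed/{name}")
            # ERROR-severity entry; a log-based alert can match on this message.
            logging.exception("Load failed, moved %s to unprocessed/", name)

Deploy it with the finalize trigger, for example: gcloud functions deploy on_file_arrival --runtime python310 --trigger-resource YOUR_BUCKET --trigger-event google.storage.object.finalize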
I am trying to replace an object (a video) in a Google Cloud Storage bucket after performing certain operations on it, giving it the same name. I'm keeping the same name because the link has already been shared with multiple users. While the operation and replacement are in progress, some chunks of the video become temporarily unavailable to people who are playing it at that moment, and they see problems for a few seconds because of this.
So my question is whether it's possible to replace the object in place without affecting the existing version already loaded in some places. I should add that I have a CDN in front of this bucket too. Can object versioning on the bucket help me here? I want to keep the name the same so that I don't have to send the link to everyone again.
I had a similar situation. When I called support, they had me name the new file EXACTLY the same as the original: delete the original file from your bucket, upload the new file with the exact same name, and the URL will be the same as the original URL.
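If you script the replacement instead, note that a single upload to the same object name is atomic: the new bytes never appear half-written, and with object versioning enabled on the bucket the previous generation is retained. A rough sketch with the Python client (bucket and file names are hypothetical); the short Cache-Control value is an assumption, meant to help the CDN pick up the new version sooner:

    from google.cloud import storage

    client = storage.Client()
    bucket = client.bucket("my-video-bucket")      # hypothetical bucket
    blob = bucket.blob("videos/shared-video.mp4")  # the same name you already shared

    # A short TTL means the CDN revalidates sooner after a replacement.
    blob.cache_control = "public, max-age=60"

    # The upload atomically replaces the object with a new generation; with
    # versioning enabled, the old generation is kept rather than deleted.
    blob.upload_from_filename("processed-video.mp4")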
A file is being placed into an S3 bucket each month, and I need to download it after it arrives in S3. I'm using the AWS TransferUtility.Download() function for this task.
The documentation doesn't indicate whether the previous month's file will be overwritten, or whether the download will fail if a file with the same name is already there. Does anyone know how this function behaves?
Thanks
Mike
So I made a test program to see what the application would do, and I found two things. First, regarding my original question: the program will overwrite the existing local file.
Next, before I could download a file, I needed to upload one. Again I used the TransferUtility, this time its Upload function. That would NOT overwrite the existing file in S3.
So it was interesting to see the different behavior between the upload and download functions.
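For anyone checking the same thing from Python, a quick sketch with boto3 (not the .NET TransferUtility; bucket and key names are hypothetical) shows the download side behaving the same way:

    import boto3

    s3 = boto3.client("s3")

    # download_file writes the local path unconditionally, silently
    # replacing last month's copy if one is already there; it does not fail.
    s3.download_file("my-report-bucket", "monthly-report.csv", "report.csv")

If overwriting is not what you want, check os.path.exists() on the target yourself before calling it.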
Regards,
Mike
I've been setting up AWS Lambda functions for S3 events. I want to set up a new structure for my bucket, but restructuring it in place isn't possible, so I set up a new bucket the way I want; I will migrate old things and send new things there. I wanted to keep some of the structure the same under a given base folder name: old-bucket/images and new-bucket/images. I set up CloudFront to serve from old-bucket/images now, but I wanted to add new-bucket/images as well. I thought the Behaviors tab would let me set things up so that CloudFront checks new-bucket/images first and then old-bucket/images. Alas, that didn't work: if the object wasn't found in the first one, that was the end of the line.
Am I misunderstanding how behaviors work? Has anyone attempted anything like this?
That is the expected behavior. An origin tells Amazon CloudFront where to obtain the data it serves to users, with each behavior routing requests to one origin based upon a path pattern (prefix, suffix, etc.).
For example, you could serve old-bucket/* from one Amazon S3 bucket, while serving new-bucket/* from a different bucket.
However, there is no capability to 'fall-back' to a different origin if a file is not found.
You could check for the existence of files before serving the link, and then provide a different link depending upon where the files are stored. Otherwise, you'll need to put all of your files in the location that matches the link you are serving.
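If you take the check-first route, a HEAD request against the new bucket is cheap. A sketch with boto3 (the bucket names and CloudFront domain are hypothetical):

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")

    def image_url(key: str) -> str:
        """Serve from new-bucket when the object exists there, else fall back."""
        try:
            s3.head_object(Bucket="new-bucket", Key=f"images/{key}")
            return f"https://dXXXXXXXX.cloudfront.net/new-bucket/images/{key}"
        except ClientError as err:
            if err.response["Error"]["Code"] == "404":
                return f"https://dXXXXXXXX.cloudfront.net/old-bucket/images/{key}"
            raise  # some other failure (permissions, throttling, ...)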
I have set up a public bucket in S3 and copied multiple objects into it; in this case they are JPEG photos.
I want to share all these objects with anonymous public users (friends), but I want to send them one static website address for the bucket, and have the objects show up as a list (or at least show all the images) when they click on that one link.
Is it possible to display the objects this way using S3, to public users who don't have an AWS account?
The alternative I know of is to send them a unique link to each object in the bucket (which would take forever!).
Any advice would be helpful.
S3 doesn't have anything built in to do a "directory index" the way nginx and Apache can. It can be done with AWS Lambda, though.
I built a rudimentary image index with Lambda; you might be able to adapt it to solve your problem.
Yes.
You can host a static webpage inside an S3 bucket: http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html
Just generate a static HTML page with links to all the photos, upload it to the bucket, set the bucket to serve as a static website, and give out the link to it.
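For instance, a rough sketch of generating and uploading such a page with boto3 (the bucket name is hypothetical, and this only covers the first 1000 objects; paginate for more):

    import boto3

    bucket = "my-photo-bucket"  # hypothetical -- your public bucket
    s3 = boto3.client("s3")

    # Turn every object key into a link on one page.
    keys = [o["Key"] for o in s3.list_objects_v2(Bucket=bucket).get("Contents", [])]
    links = "\n".join(
        f'<li><a href="https://{bucket}.s3.amazonaws.com/{k}">{k}</a></li>' for k in keys
    )

    # Upload with a text/html content type so browsers render it.
    html = f"<html><body><ul>\n{links}\n</ul></body></html>"
    s3.put_object(Bucket=bucket, Key="index.html", Body=html.encode("utf-8"),
                  ContentType="text/html")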
Or, for the extra lazy :) https://github.com/rgrp/s3-bucket-listing
Thanks for your answers; they helped me find a really simple solution. On a different forum I found that someone has written a script and hosted it at a link that you upload straight into your bucket, and it puts all the objects into a simple list... genius!
This is the link:
http://regexp.s3.amazonaws.com/list.html
So, for the less techy people (like me): you literally upload the file at that link into your bucket. You don't even have to download it onto your PC first; just copy and paste the URL into the upload file path.
When I uploaded it, the file appeared in the S3 bucket as list.html.
Make sure the file is readable and you've set its ACL appropriately, and make sure your bucket has a policy that allows anyone to read it.
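That bucket policy typically looks like the following (substitute your bucket name in the Resource ARN):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "PublicReadGetObject",
          "Effect": "Allow",
          "Principal": "*",
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::your-bucket-name/*"
        }
      ]
    }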
Your bucket's objects (contents) are then shown at the URL below:
http://<your bucket name>.s3.amazonaws.com/list.html
Where <your bucket name> is written above, replace that part with just the name of your bucket.
You should then be able to click on that link and see the list of objects in your bucket. Once you get your head around it, it is actually very simple.