Privacy of backups in Google Cloud Storage - google-cloud-platform

I'm setting up a Coldline bucket for unstructured data backup.
The bucket-level public access setting for my Coldline storage bucket is set to "Per Object", and the object-level public access setting is "Not Public".
But whenever I generate an access link to one of my private storage objects, I can open the generated link without any credentials (for example, in an incognito window).
Does this mean that if someone is able to generate such a link (highly unlikely) or snoop on my GET requests (much more likely), they get view access to my private stored objects?

I think you are referring to signed URLs, which can be used to give time-limited read or write access to GCP buckets and objects. Keep in mind that this method gives access to anyone in possession of the URL, regardless of whether they have a Google account, as you mentioned.
If you want to implement a user-authenticated method instead, it is recommended to use IAM and ACL permissions. You can take a look at the Access Control Options document to learn more about the available alternatives for controlling who has access to your Cloud Storage data.
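As a concrete illustration, here is a minimal sketch of generating a V4 signed download URL. The helper only assumes an object with a `generate_signed_url` method (as on `google.cloud.storage.Blob`); the bucket and object names in the commented usage are placeholders:

```python
from datetime import timedelta


def make_download_url(blob, minutes=15):
    """Return a time-limited (V4) signed URL granting read access to one object.

    `blob` is expected to behave like google.cloud.storage.Blob. Anyone
    holding the returned URL can GET the object until the URL expires,
    with no Google account required.
    """
    return blob.generate_signed_url(
        version="v4",
        expiration=timedelta(minutes=minutes),
        method="GET",
    )


# Real usage (requires google-cloud-storage and credentials; names are placeholders):
#   from google.cloud import storage
#   client = storage.Client()
#   blob = client.bucket("my-private-bucket").blob("backups/archive.tar.gz")
#   print(make_download_url(blob))
```

Keeping the expiration short limits the window in which a leaked or snooped URL is useful.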

Related

GCP Bucket - create link for upload

I've looked all over but can't seem to see how this might be done.
I have a GCP bucket, which is publicly accessible. I need a link to give to an associate which they can use to upload some files into that bucket. There is no need for authentication; the files are public domain anyway. I need the process to be super simple for the associate.
Once the files are uploaded I can grab them and destroy the bucket/project anyway.
Is this possible?
Use a signed URL. From the documentation (emphasis mine):
In some scenarios, you might not want to require your users to have a Google account in order to access Cloud Storage, but you still want to control access using your application-specific logic. The typical way to address this use case is to provide a signed URL to a user, which gives the user read, write, or delete access to that resource for a limited time. You specify an expiration time when you create the signed URL. Anyone who knows the URL can access the resource until the expiration time for the URL is reached or the key used to sign the URL is rotated.
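A sketch of the upload variant, assuming the `google-cloud-storage` Python client (bucket/object names and the content type are placeholders); the associate needs nothing but the URL:

```python
from datetime import timedelta


def make_upload_url(blob, content_type="application/octet-stream", minutes=60):
    """Return a V4 signed URL that lets anyone holding it PUT one object.

    The uploader must send the same Content-Type header the URL was
    signed with, but needs no Google account or credentials.
    """
    return blob.generate_signed_url(
        version="v4",
        expiration=timedelta(minutes=minutes),
        method="PUT",
        content_type=content_type,
    )


# Real usage (requires google-cloud-storage and credentials; names are placeholders):
#   from google.cloud import storage
#   blob = storage.Client().bucket("drop-bucket").blob("incoming/report.pdf")
#   url = make_upload_url(blob, content_type="application/pdf")
# The associate then uploads with, for example:
#   curl -X PUT -H "Content-Type: application/pdf" --upload-file report.pdf "<url>"
```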

GCS bucket should be accessible from only specific people

I have created a GCS bucket in my GCP project where I will save some HTML files. Users will access those HTML files from a browser using the object URL.
I want that HTML URL to be accessible only by specific people who belong to my organization. Even if someone from outside hits that URL, the data should not be accessible to them.
What's the way to do that?
In (1) you can find the two available methods for controlling the access to the objects inside your Google Cloud Storage buckets:
-Uniform: You will provide access to different users depending on the IAM roles you grant to them (2). All of the objects inside the same bucket will share the same policy (you can also define a group of objects with the same prefix instead of the whole bucket)
-Fine-grained: Apart from the IAM roles, you can also use Access Control Lists (3) for defining special policies for each object. This way, a user can have access to only some of the objects inside your bucket.
Once you have defined the appropriate policies according to the desired permissions to be granted to each user, you will need to share with them the Authenticated URL of the object and, according to the GCS UI, "Only users granted permission can access the object with this link".
Conversely, if you would like to make an object publicly available to everyone on the Internet, you can follow (4) to create a public URL for a certain object. This is also described in (5):
Most of the operations you perform in Cloud Storage must be authenticated. The only exceptions are operations on objects that allow anonymous access. Objects are anonymously accessible if the allUsers group has READ permission.
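For the "only people in my organization" case, the IAM binding you add would typically use a `domain:` member. The helper below is a hypothetical sketch that edits a plain list of binding dicts, mirroring the shape returned by `bucket.get_iam_policy()` in the Cloud Storage Python client; the member strings in the comments are examples, not values from your project:

```python
def grant_object_reader(bindings, member):
    """Add `member` to the roles/storage.objectViewer binding (create it if absent).

    `bindings` mirrors the list of {"role": ..., "members": ...} entries
    from bucket.get_iam_policy().bindings in the Cloud Storage client.
    Example members: "domain:example.com" (everyone in your organization),
    "group:team@example.com", "user:alice@example.com".
    """
    role = "roles/storage.objectViewer"
    for binding in bindings:
        if binding["role"] == role:
            binding["members"].add(member)
            return bindings
    bindings.append({"role": role, "members": {member}})
    return bindings


# Real usage with uniform bucket-level access (requires credentials):
#   from google.cloud import storage
#   bucket = storage.Client().bucket("my-bucket")
#   policy = bucket.get_iam_policy(requested_policy_version=3)
#   grant_object_reader(policy.bindings, "domain:example.com")
#   bucket.set_iam_policy(policy)
```

Users then open the object's Authenticated URL and must sign in with an account covered by the binding.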

Using Google Cloud Platform Storage to store user images

I was trying to understand Google Cloud Platform storage but couldn't really comprehend the language used in the documentation. I wanted to ask whether you can use the storage service and its APIs to store photos users take within your application, and also get the images back given a URL? And even if you can, would that be a safe and reasonable approach?
Yes, you can pretty much use a storage bucket to store any kind of data.
In terms of transferring images from an application to storage buckets, the application must be authorised to write to the bucket.
One option is to use a service account key within the application. A service account is a special account that an application can use to authorise requests to various Google APIs, including the Storage API.
There is some more information about service accounts here and information here about using service account keys. These keys can be used within your application, and allow the application to inherit the permission/scopes assigned to that service account.
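A rough sketch of an upload helper, assuming the `google-cloud-storage` client authenticated with a service account key; the file paths, bucket name, and key filename in the comments are placeholders:

```python
def upload_image(bucket, local_path, dest_name, content_type="image/jpeg"):
    """Upload a local file to `bucket` under `dest_name` and return the blob.

    `bucket` is expected to behave like google.cloud.storage.Bucket.
    """
    blob = bucket.blob(dest_name)
    blob.upload_from_filename(local_path, content_type=content_type)
    return blob


# Real usage (requires google-cloud-storage; names are placeholders):
#   from google.cloud import storage
#   client = storage.Client.from_service_account_json("service-account-key.json")
#   upload_image(client.bucket("user-images"), "/tmp/photo.jpg", "users/42/photo.jpg")
```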
In terms of retrieving images using a URL, one possible option would be to use signed URLs which would allow you to give users read or write access to an object (in your case images) in a bucket for a given amount of time.
Access to bucket objects can also be controlled with ACLs (Access Control Lists). If you're happy for your images to be available publicly (i.e. accessible to everybody), it's possible to set an ACL with 'Reader' access for allUsers.
More information on this can be found here.
Should you decide to make the images available publicly, the URL format to retrieve the object/image from the bucket would be:
https://storage.googleapis.com/[BUCKET_NAME]/[OBJECT_NAME]
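One practical detail: object names can contain spaces and other characters that must be percent-encoded in that URL. A small stdlib-only helper (the bucket/object names in the test are made up):

```python
from urllib.parse import quote


def public_url(bucket_name, object_name):
    """Build the public URL for an object, percent-encoding special characters.

    Slashes are kept as-is so "folder"-style object names remain readable
    path segments.
    """
    return f"https://storage.googleapis.com/{bucket_name}/{quote(object_name, safe='/')}"
```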
EDIT:
In relation to using an interface to upload the files before they land in the bucket, one option would be to have an instance with an external IP address (or multiple instances behind a load balancer) where the images are initially uploaded. You could mount Cloud Storage on this instance using FUSE, so that uploaded files are easily transferred to the bucket. As for databases, you have the option of manually installing a database on a Compute Engine instance, or using a fully managed database service such as Cloud SQL.

How to access a single URL to allUsers that can fetch the random object in the bucket?(Google cloud storage)

I created a bucket on Google Cloud Storage and set the permission "Read access to GCS objects" for allUsers.
So, I can use storage.googleapis.com/bucket-name/object-name to let anyone read the object.
However, is it possible to have a single URL (e.g. storage.googleapis.com/bucket-name/random) return a random object from this bucket on every read request?
I know the permission is assigned to the bucket, not the object, so it seems like I should be able to do something to grab a random object?
Or what service should I use to solve the problem?
Objects have ACLs too, and you can grant allUsers READ on an object. That said, per-object ACLs can be a pain to manage, so I would recommend using a second bucket instead.
Depending on your use case, you might be more interested in signed URLs. You can use signed URLs to grant temporary access to a user instead of permanent access to everyone.
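There is no built-in "random" endpoint, so the usual approach is a tiny piece of application code that picks an object and redirects to its public URL. A stdlib sketch of the selection step (in practice you would list the object names with `client.list_blobs(bucket_name)` and serve a 302 redirect from App Engine or a Cloud Function; the names below are placeholders):

```python
import random


def pick_random_object_url(object_names, bucket_name, rng=random):
    """Return the public URL of one randomly chosen object from the bucket.

    `object_names` is any non-empty sequence of object names; `rng` is
    injectable so the choice can be made deterministic in tests.
    """
    name = rng.choice(list(object_names))
    return f"https://storage.googleapis.com/{bucket_name}/{name}"
```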

Restrict Access to S3 bucket on AWS

I am storing files in an S3 bucket. I want access to the files to be restricted.
Currently, anyone with the URL to the file is able to access the file.
I want a behavior where file is accessed only when it is accessed through my application. The application is hosted on EC2.
Following are two possible ways I could find:
1. Use the "referer" key in the bucket policy.
2. Change "allowed origin" in the CORS configuration.
Which of the two should be used, given the fact that 'referer' could be spoofed in the request header?
Also can cloudfront play a role over here?
I would recommend using a pre-signed URL, which permits access to private objects stored on Amazon S3. It is a means of keeping objects secure while granting temporary access to a specific object.
It is created via a hash calculation based on the object path, expiry time and a shared Secret Access Key belonging to an account that has permission to access the Amazon S3 object. The result is a time-limited URL that grants access to the object. Once the expiry time passes, the URL does not return the object.
Start by removing existing permissions that grant access to these objects. Then generate Pre-Signed URLs to grant access to private content on a per-object basis, calculated every time you reference an S3 object. (Don't worry, it's fast to do!)
See documentation: Sample code in Java
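To make the hash calculation described above concrete, here is a simplified, stdlib-only illustration of AWS Signature Version 4 query-string presigning for a GET request. This is a sketch for understanding the mechanism, not a replacement for an SDK; the bucket, key, and credentials in the test are dummies:

```python
import hashlib
import hmac
from urllib.parse import quote


def _hmac(key, msg):
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()


def presign_get(bucket, key, access_key, secret_key, region, amz_date, expires=3600):
    """Build an S3 presigned GET URL (AWS Signature Version 4, query-string style).

    `amz_date` is a UTC timestamp like "20240101T000000Z". Simplified for
    illustration (simple object keys, host-only signed headers).
    """
    datestamp = amz_date[:8]
    host = f"{bucket}.s3.{region}.amazonaws.com"
    scope = f"{datestamp}/{region}/s3/aws4_request"
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    # Canonical query string: sorted, fully percent-encoded key=value pairs.
    qs = "&".join(f"{quote(k, safe='')}={quote(v, safe='')}" for k, v in sorted(params.items()))
    canonical_request = "\n".join(
        ["GET", f"/{key}", qs, f"host:{host}\n", "host", "UNSIGNED-PAYLOAD"]
    )
    string_to_sign = "\n".join(
        ["AWS4-HMAC-SHA256", amz_date, scope, hashlib.sha256(canonical_request.encode()).hexdigest()]
    )
    # Derive the signing key from the secret via the chained HMAC cascade.
    signing_key = _hmac(
        _hmac(_hmac(_hmac(("AWS4" + secret_key).encode(), datestamp), region), "s3"),
        "aws4_request",
    )
    signature = hmac.new(signing_key, string_to_sign.encode(), hashlib.sha256).hexdigest()
    return f"https://{host}/{key}?{qs}&X-Amz-Signature={signature}"
```

Because the signature is derived from the secret key, expiry time, and object path, tampering with any query parameter invalidates the URL, and after the expiry time S3 refuses the request.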
When dealing with a private S3 bucket, you'll want to use an AWS SDK appropriate for your use case.
Here are SDKs for many different languages: http://aws.amazon.com/tools/
Within each SDK, you can find sample calls to S3.
If you are trying to make private calls via browser-side JavaScript, you can use CORS.
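For reference, an S3 CORS configuration restricting browser requests to one origin might look like this (the origin is a placeholder; this controls which pages the browser lets call S3, not who is authorised, so it complements rather than replaces pre-signed URLs):

```json
[
  {
    "AllowedOrigins": ["https://app.example.com"],
    "AllowedMethods": ["GET", "PUT"],
    "AllowedHeaders": ["*"],
    "MaxAgeSeconds": 3000
  }
]
```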