Google Cloud load balancer: dynamically adding backend buckets - google-cloud-platform

Goal:
For example, users could create courses, which have resources such as images, videos, etc.
I want to restrict access to them using signed cookies. i.e. resources on /courses/1 will only be accessible to logged-in users who have a valid signed cookie.
Background
I'll be creating a bucket of media files per course based on https://cloud.google.com/storage/docs/access-control#recommended_bucket_architecture.
Where I am stuck
How to add backend buckets to the load balancer dynamically, since I can only add them in the console
How to use the same signing key for all buckets for easy maintenance (https://cloud.google.com/cdn/docs/using-signed-cookies#creatingkeys). It seems like I need to create a key for each bucket manually.
So is there a standard way to do these things, or am I thinking about this whole architecture wrong? It won't scale without automation.

You will be limited to 50 path rules, as mentioned in the quotas, and therefore to 50 courses. I hope you expect more than this!
So this pattern isn't suitable for your use case. You need to use a single bucket and control access with a backend app, which then generates signed URLs for the resources requested by the users.
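For illustration, a minimal sketch of that backend using the google-cloud-storage Python client, assuming a single bucket with one prefix per course; the bucket name and the is_enrolled() check are hypothetical placeholders for your own app logic:

```python
from datetime import timedelta
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("courses-media")  # hypothetical single bucket

def is_enrolled(user, course_id):
    # Placeholder: replace with your application's enrollment check.
    return True

def get_resource_url(user, course_id, filename):
    # The application, not Cloud Storage, decides who may see the resource.
    if not is_enrolled(user, course_id):
        raise PermissionError("user is not enrolled in this course")
    blob = bucket.blob(f"courses/{course_id}/{filename}")
    # Short-lived V4 signed URL; the user needs no Google credentials.
    return blob.generate_signed_url(
        version="v4",
        expiration=timedelta(minutes=15),
        method="GET",
    )
```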

Related

Google Cloud Storage restrict download usage

Is there a way to restrict a GCP Storage bucket to a specific domain, Android/iOS app, etc., so that only those entities are allowed to use this particular bucket's resources?
If the users aren't always authenticated, there is no strong security, only small things to increase the difficulty of getting through...
I recommend serving your assets behind an HTTPS load balancer with the bucket as backend (like a static website).
The main reason is the ability to use Cloud Armor and to customize the policy to catch and check one of the request attributes. I think you can achieve something with the request headers, either with a custom header that you set in your application, or by reusing application-specific headers (I'm not a mobile developer, but I know Android has them; I'm sure iOS does too).
It's not very strong, but it lets you test easily and makes it harder for anyone to get the content.
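As a sketch of the custom-header idea (the header name, token value, and domain are all made up for illustration):

```python
import requests

# The application fetches assets through the HTTPS load balancer,
# sending a custom header that ordinary hotlinkers won't include.
resp = requests.get(
    "https://assets.example.com/images/logo.png",  # hypothetical LB frontend
    headers={"X-App-Token": "shared-secret"},      # hypothetical header/value
)

# A matching Cloud Armor rule could then allow only requests where, e.g.:
#   request.headers['x-app-token'] == 'shared-secret'
# while a lower-priority default rule denies everything else.
```

As the answer notes, a shared header is only an obstacle, not real security: anyone who extracts the token from the app can replay it.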

Google Cloud Storage hotlink protection?

I uploaded an image to Google Cloud Storage, but someone used the URL of the image on their website without my consent, which caused me to lose money. How can this be prevented?
As @Doug and @guillaume said in the comments, by default a storage URL is accessible to anyone on the internet who has the URL.
If you want only entitled persons to have access to your Storage objects, you can use signed URLs.
A signed URL is a URL that provides limited permission and time to make a request. Signed URLs contain authentication information in their query string, allowing users without credentials to perform specific actions on a resource.
Here you can read more about what the process of generating signed URLs looks like and how to achieve it.
There are also other security considerations for Storage buckets that may be worth reading about. For example, if you do not need a 100% secure bucket, you can simply choose hard-to-guess names for your buckets and objects. That way it will be very difficult for an unauthorized person to guess a link. Stronger solutions would be ACLs and different IAM role assignments.
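For completeness, a minimal sketch of generating a V4 signed URL with the google-cloud-storage Python client (bucket and object names are placeholders):

```python
from datetime import timedelta
from google.cloud import storage

client = storage.Client()
blob = client.bucket("my-private-bucket").blob("images/photo.jpg")

# Anyone holding this URL can GET the object until it expires;
# after that the link stops working, which defeats simple hotlinking.
url = blob.generate_signed_url(
    version="v4",
    expiration=timedelta(hours=1),
    method="GET",
)
print(url)
```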

Using Google Cloud Platform Storage to store user images

I was trying to understand Google Cloud Platform storage but couldn't really comprehend the language used in the documentation. I wanted to ask if you can use the storage and its APIs to store photos users take within your application, and also get the images back if provided with a URL? And even if you can, would it be a safe and reasonable method to do so?
Yes, you can pretty much use a storage bucket to store any kind of data.
In terms of transferring images from an application to storage buckets, the application must be authorised to write to the bucket.
One option is to use a service account key within the application. A service account is a special account that can be used by an application to authenticate to various Google APIs, including the storage API.
There is some more information about service accounts here, and information here about using service account keys. These keys can be used within your application, and allow the application to inherit the permissions/scopes assigned to that service account.
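As a sketch, uploading a user photo with the google-cloud-storage Python client, authenticated via a service account key file (the file name, bucket, and object paths are placeholders):

```python
from google.cloud import storage

# Authenticate as the service account whose key file ships with the app.
client = storage.Client.from_service_account_json("service-account-key.json")
bucket = client.bucket("user-photos")  # hypothetical bucket name

# Upload a local image; the object name becomes its path in the bucket.
blob = bucket.blob("uploads/user123/photo.jpg")
blob.upload_from_filename("photo.jpg")
```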
In terms of retrieving images using a URL, one possible option would be to use signed URLs, which would allow you to give users read or write access to an object (in your case, images) in a bucket for a given amount of time.
Access to bucket objects can also be controlled with ACLs (Access Control Lists). If you're happy for your images to be available publicly (i.e. accessible to everybody), it's possible to set an ACL with 'Reader' access for AllUsers.
More information on this can be found here.
Should you decide to make the images available publicly, the URL format to retrieve the object/image from the bucket would be:
https://storage.googleapis.com/[BUCKET_NAME]/[OBJECT_NAME]
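For instance, a sketch of granting 'Reader' to AllUsers on a single object with the Python client (names are placeholders; this requires fine-grained ACLs, i.e. uniform bucket-level access disabled on the bucket):

```python
from google.cloud import storage

client = storage.Client()
blob = client.bucket("user-photos").blob("uploads/user123/photo.jpg")

# Grant read access to allUsers on this object via its ACL.
blob.make_public()

# The object is now served at the public URL format shown above, e.g.:
#   https://storage.googleapis.com/user-photos/uploads/user123/photo.jpg
print(blob.public_url)
```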
EDIT:
In relation to using an interface to upload the files before the files land in the bucket, one option would be to have an instance with an external IP address (or multiple instances behind a load balancer) where the images are initially uploaded. You could mount Cloud Storage on this instance using Cloud Storage FUSE, so that uploaded files are easily transferred to the bucket. In terms of databases, you have the option of manually installing your database on a Compute Engine instance, or using a fully managed database service such as Cloud SQL.

How to store files in s3 that are only available to particular groups of web app users

I have an application where users are part of a 'group' of users. Each group can 'upload' documents to the application. Behind the scenes I am using S3 to store these documents.
I've spent a ton of time reading the AWS documentation but still don't understand the simplest/correct way to do the following:
User 1 in group A can upload documents to application
User 2 in group A can see and access all group A documents in application
User 3 in group B can upload documents to application
User 3 in group B cannot see any documents that belong to group A (and vice-versa)
Should I be using the API to create a new bucket for each 'group'?
Or can all of this be done in a single bucket, with subdirectories for each group and access limitations set on them?
Should I be setting up an IAM group policy and applying it to each web app user?
I'm not sure of the best architecture for this scenario, so I would really appreciate a pointer in the right direction.
AWS credentials should be assigned to your application and to your IT staff who need to maintain the application.
Users of your application should not be given AWS credentials.
Users should interact directly with your application and your application will make calls to the AWS API from the back-end. This way, your application has full control of what data they can see and what operations they can perform.
Think of it like a database -- you never want to give users direct access to a database. Instead, they should always interact via an application, which will store and update information in a database.
There are some common exceptions to the above:
If you want users to access/download a file stored in S3, your application can generate a pre-signed URL, which is a time-limited URL that permits access to an Amazon S3 object. Your application is responsible for generating the URL when it wants to grant access, and the URL can be included in an HTML page (e.g. to show a private picture on a web page).
If you want to allow users to upload files directly to S3, you could again use a pre-signed URL, or you could grant public Write access to an Amazon S3 bucket. Think of it like a modern FTP server. (Both pre-signed options are sketched below.)
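For example, a sketch of both pre-signed options with boto3 (bucket and key names are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Time-limited URL allowing a GET of a private object (download case).
download_url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-app-bucket", "Key": "docs/report.pdf"},
    ExpiresIn=600,  # seconds
)

# Time-limited URL allowing a PUT directly to the bucket (upload case).
upload_url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "my-app-bucket", "Key": "uploads/new-file.pdf"},
    ExpiresIn=600,
)
```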
Bottom line: Your application is in charge! Also, consider using pre-signed URLs to provide direct access to objects when the application permits it.

How to limit access to Amazon S3 files to specific people?

I work on a SaaS application where Creators can create Groups and invite others to their Group to share files, chat and so on. Only people within a specific group should have access to that group's files.
People from other groups must not have access to files that are not their group's.
And of course all file permissions should be set to 'Private', i.e. they should not be searchable/visible/accessible by anonymous internet users, since the information in those files is for personal use only.
I am new to Amazon S3 and don't know how to achieve this... Should I create only one main bucket, or create a new bucket for each group?
It is not recommended to use AWS Identity and Access Management (IAM) for storing application users. Application users should be maintained in a separate database (or LDAP, Active Directory, etc.).
Therefore, creating "one bucket per group" is not feasible, since it is not possible to tie your application's users to permissions within Amazon S3.
The better method would be to manage permissions within your application. When a user requests access to a file, the application can determine whether they should be permitted access. If they are permitted, then the application can generate a Pre-Signed URL.
A Pre-Signed URL permits access to private objects stored on Amazon S3. It is a means of keeping objects secure, yet granting temporary access to a specific object.
When listing available files, your application would generate links that include the pre-signed URL. When a user clicks a link, they can access the file. After a certain time has expired (e.g. 10 minutes), the link will no longer function, so if a user shares a link with somebody else, it will probably have timed out.
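To make this concrete, a sketch with boto3 that lists a group's files (assuming one key prefix per group in a single bucket) and emits short-lived links; the bucket name, prefix layout, and user_in_group() check are hypothetical:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "saas-group-files"  # hypothetical single bucket

def user_in_group(user, group_id):
    # Placeholder: replace with your application's membership check.
    return True

def list_group_files(user, group_id):
    # The application enforces group membership; S3 stays private.
    if not user_in_group(user, group_id):
        raise PermissionError("user is not in this group")
    objects = s3.list_objects_v2(Bucket=BUCKET, Prefix=f"groups/{group_id}/")
    # One pre-signed link per object; each expires after 10 minutes.
    return [
        s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": BUCKET, "Key": obj["Key"]},
            ExpiresIn=600,
        )
        for obj in objects.get("Contents", [])
    ]
```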
See: Creating a pre-signed URL in Ruby