GCP firewall settings for an individual storage bucket - google-cloud-platform

I want to create firewall rules particular to a storage bucket in Google Cloud Platform. I see that we have an option to create firewall rules, but how can we apply those rules to a specific bucket and not to all other storage buckets?

You do not have to create firewall rules for buckets. What you need is to set permissions on the buckets using Cloud IAM.
1. Open the Cloud Storage browser in the Google Cloud Platform Console.
2. Click the drop-down menu associated with the bucket to which you want to grant a member a role. The drop-down menu appears as three vertical dots to the far right of the bucket's row.
3. Choose Edit bucket permissions.
4. In the Add members field, enter one or more identities that need access to your bucket.
5. Select a role (or roles) from the Select a role drop-down menu. The roles you select appear in the pane with a short description of the permissions they grant.
6. Click Add.
You can add as members individual users, groups, domains, or even the public as a whole. Members are assigned roles, which grant members the ability to perform actions in Cloud Storage as well as GCP more generally.
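Under the hood, the "Edit bucket permissions" dialog adds a member/role binding to the bucket's IAM policy. A minimal sketch of that binding change, assuming the plain JSON policy structure the Cloud Storage IAM API returns (the bucket member and emails here are made-up examples):

```python
# Sketch: add a member/role binding to a bucket IAM policy, as the
# "Edit bucket permissions" dialog does. The policy dict mirrors the
# JSON structure returned by the Cloud Storage IAM API; the member
# emails are made-up examples.

def add_bucket_binding(policy, role, member):
    """Add `member` to the binding for `role`, creating the binding if needed."""
    for binding in policy["bindings"]:
        if binding["role"] == role:
            if member not in binding["members"]:
                binding["members"].append(member)
            return policy
    policy["bindings"].append({"role": role, "members": [member]})
    return policy

policy = {"bindings": [
    {"role": "roles/storage.admin", "members": ["user:owner@example.com"]},
]}

# Grant read-only object access to one user on this bucket only.
add_bucket_binding(policy, "roles/storage.objectViewer",
                   "user:reader@example.com")
print(policy["bindings"][1])
# {'role': 'roles/storage.objectViewer', 'members': ['user:reader@example.com']}
```

The point is that the permission lives on the bucket resource itself, which is why it applies to one bucket and not the whole project.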
You can make a Cloud Storage bucket accessible only by a certain service account link.
A service account is a special type of Google account intended to represent a non-human user that needs to authenticate and be authorized to access data in Google APIs link.
You cannot apply firewall rules to single buckets. Firewall rules are defined at the network level, and only apply to the network where they are created.

Your inquiry is a known Feature Request that has not been implemented yet on Cloud Storage. It asks to allow IP whitelisting in a bucket policy, just like AWS does with S3 buckets. You can “star” the FR so that it gets more visibility, and also add your email to the “CC” list so that you get updates.
As a workaround, you may request access to use VPC Service Controls. According to official documentation, with VPC Service Controls, administrators can define a security perimeter around resources of Google-managed services to control communication to and between those services.
Cloud Storage is included in the Supported products of these Google-managed services and here you can find its limitations.
You can use access levels to grant controlled access to protected Google Cloud Platform (GCP) resources in service perimeters from outside a perimeter.
Access levels define various attributes that are used to filter requests made to certain resources. Access levels can consider various criteria, such as IP address and user identity. Additionally, they are created and managed using Access Context Manager.
This example describes how to create an access level condition that allows access only from a specified range of IP addresses.
However, consider that VPC Service Controls create a border around the project, a “virtual area” where Access Context Manager rules can be applied. An ACM rule specifying an IP address will allow that IP address to access all Cloud Storage objects and all other protected resources owned by that project, which is not the expected result. As stated here, you cannot apply an IP address rule to a single object, only to all objects in a project.
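Conceptually, the IP-based access level described above is just a CIDR allow-list evaluated against the caller's address, applied at the project perimeter rather than per object. A rough sketch of that evaluation, with a made-up CIDR range (real access levels are defined in Access Context Manager and evaluated by Google, not by your code):

```python
# Sketch of how an IP-based access level condition is evaluated: the
# caller's address is matched against the allowed CIDR blocks. The
# range below is a made-up example of a corporate network.
import ipaddress

ALLOWED_SUBNETWORKS = ["203.0.113.0/24"]  # example range, an assumption

def ip_allowed(caller_ip):
    """Return True if caller_ip falls inside any allowed subnetwork."""
    addr = ipaddress.ip_address(caller_ip)
    return any(addr in ipaddress.ip_network(net)
               for net in ALLOWED_SUBNETWORKS)

print(ip_allowed("203.0.113.42"))   # True: inside the allowed range
print(ip_allowed("198.51.100.7"))   # False: outside the perimeter
```

Note the granularity: the decision is per caller and per perimeter, which is exactly why the allow rule ends up covering every protected resource in the project rather than one bucket.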
Furthermore, here you can find a useful link to the best practices concerning security and access control on Cloud Storage buckets, including tips on “sharing your files” while hosting a static website.
In conclusion, another option is Firebase Hosting instead of Cloud Storage, as stated here. Firebase Hosting is a Google hosting service that serves static web content to users in a secure, fast, free, and easy way.

Related

Google Cloud Storage Signed URL for Entire Bucket?

I have user-owned objects in a Google Cloud Storage bucket which I'm controlling access to through a webapp backend. Currently, the webapp backend authenticates the user and then generates signed read URLs for the objects. This works great, but can result in a high volume of URLs being generated in response to a bulk action. The failure rate of these signed URLs is very low, but when enough of them are generated, some fail, and a timeout or connection reset is noticeable to users.
Is there any way to give this kind of controlled, time limited access to users at the bucket level, or in bulk in another way, without creating GCP accounts for users?
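One practical mitigation for the bulk-generation failures described above is to retry the few transient errors with exponential backoff rather than change the access model. A sketch, assuming an injected `generate_url` callable that stands in for whatever signed-URL helper the backend uses (it is not a real library API):

```python
# Sketch: generate many signed URLs, retrying transient failures with
# exponential backoff. `generate_url` stands in for the backend's real
# signed-URL helper and is injected so the retry logic is self-contained.
import time

def bulk_signed_urls(object_names, generate_url, retries=3, base_delay=0.05):
    urls = {}
    for name in object_names:
        for attempt in range(retries):
            try:
                urls[name] = generate_url(name)
                break
            except (ConnectionError, TimeoutError):
                if attempt == retries - 1:
                    raise  # give up after the last attempt
                time.sleep(base_delay * 2 ** attempt)
    return urls

# Demo with a fake generator that fails once before succeeding.
calls = {"n": 0}
def flaky(name):
    calls["n"] += 1
    if calls["n"] == 1:
        raise TimeoutError("transient connection reset")
    return f"https://storage.example/{name}?sig=..."

print(bulk_signed_urls(["a.txt", "b.txt"], flaky))
```

Since the failure rate is described as very low, one or two retries usually makes the bulk action reliable without creating any accounts.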
You are correct, all these methods require a service account. After further investigation, there is no way to provide access without a GCP account.
At the bucket level, there are uniform bucket-level access, Identity and Access Management (IAM), and Access Control Lists (ACLs). If you want to avoid creating GCP accounts for the users, then try Access Control Lists (ACLs).
With ACLs you can also determine who the readers, writers, and owners will be, and you can grant access to anyone with an external email address, which saves you the time of creating GCP accounts for the users. Here are the scopes you can grant access to, and what each scope covers:
Google account email address:
Every user who has a Google account must have a unique email address associated with that account. You can specify a scope by using any email address that is associated with a Google account, such as a gmail.com address.
Cloud Storage remembers email addresses as they are provided in ACLs until the entries are removed or replaced. If a user changes email addresses, you should update ACL entries to reflect these changes.
Google group email address:
Every Google group has a unique email address that is associated with the group. For example, the Cloud Storage Announce group has the following email address: gs-announce@googlegroups.com. You can find the email address that is associated with a Google group by clicking About on the group's homepage.
Like Google account email addresses, Cloud Storage remembers group email addresses as they are provided in ACLs until the entries are removed. You do not need to worry about updating Google Group email addresses, because Google Group email addresses are permanent and unlikely to change.
Convenience values for projects:
Convenience values allow you to grant bulk access to your project's viewers, editors, and owners. Convenience values combine a project role and an associated project number. For example, in project 867489160491, editors are identified as editors-867489160491. You can find your project number on the homepage of the Google Cloud Console.
You should generally avoid using convenience values in production environments, because they require granting basic roles, a practice which is discouraged in production environments.
G Suite or Cloud Identity:
G Suite and Cloud Identity customers can associate their email accounts with an Internet domain name. When you do this, each email account takes the form USERNAME@YOUR_DOMAIN.com. You can specify a scope by using any Internet domain name that is associated with G Suite or Cloud Identity.
Special identifier for all Google account holders:
This special scope identifier represents anyone who is authenticated with a Google account. The special scope identifier for all Google account holders is allAuthenticatedUsers. Note that while this identifier is a User entity type, when using the Cloud Console it's labeled as a Public entity type.
Special identifier for all users:
This special scope identifier represents anyone who is on the Internet, with or without a Google account. The special scope identifier for all users is allUsers. Note that while this identifier is a User entity type, when using the Cloud Console it's labeled as a Public entity type.
You have full control of the access you want the users to have. You can learn more about each access level and what it does with the following links: link 1, link 2.
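The scopes listed above map onto ACL entity strings in the Cloud Storage JSON API (`user-`, `group-`, `domain-`, `project-` prefixes, plus the special identifiers). A small sketch of how those entity strings are formed; the emails and project number are made-up examples:

```python
# Sketch: build Cloud Storage ACL entity strings for the scopes listed
# above. The formats follow the ACL entity syntax of the Cloud Storage
# JSON API; the concrete emails and project number are made up.

def acl_entity(scope, value=None):
    formats = {
        "user": f"user-{value}",        # Google account email address
        "group": f"group-{value}",      # Google group email address
        "domain": f"domain-{value}",    # G Suite / Cloud Identity domain
        "project": f"project-{value}",  # convenience value, e.g. "editors-867489160491"
        "all_authenticated": "allAuthenticatedUsers",  # any Google account
        "all": "allUsers",              # anyone on the Internet
    }
    return formats[scope]

print(acl_entity("user", "reader@gmail.com"))         # user-reader@gmail.com
print(acl_entity("project", "editors-867489160491"))  # project-editors-867489160491
print(acl_entity("all"))                              # allUsers
```

Granting READER to a `user-…` entity with an external Gmail address is the path that avoids creating GCP accounts, as the answer suggests.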

Google Cloud Platform admin panel IP restriction

Is it possible to restrict access to cloud.google.com to specific IPs?
When I create a principal I give it a specific role, but I would like to give access to that user only if they log in from a specific IP.
[EDIT] To clarify, access should be restricted to the whole project. For example, I limit access to only IP1. User "A" logs in to cloud.google.com, chooses the project, and if he logged in from IP2, he won't see anything (a "you don't have access .." message, the same as with role-based restrictions when you go somewhere you shouldn't).
If he connects from IP1 he should have access to everything his role gives him.
The only limits I can find in the documentation (including the IAP pointed out by Arden) are restrictions TO something (an app, a resource, etc.), not FROM something.
So the question is: is it even possible to do something like that?
You need to implement Identity-Aware Proxy (IAP): Authenticate users with Google Accounts
When to use IAP
Use IAP when you want to enforce access control policies for applications and resources. IAP works with signed headers or the App Engine standard environment Users API to secure your app. With IAP, you can set up group-based application access: a resource could be accessible for employees and inaccessible for contractors, or only accessible to a specific department.

GCP default service accounts best security practices

So, we have a "Compute Engine default service account", and everything is clear with it:
it's a legacy account with excessive permissions
it used to be limited by the "scope" assigned to each GCE instance or instance group
it's recommended to delete this account and use a custom service account for each service, following the least-privilege principle.
The second "default service account" mentioned in the docs is the "App Engine default service account". Presumably it's assigned to the App Engine instances and it's also a legacy thing that needs to be treated similarly to the Compute Engine default service account. Right?
And what about "Google APIs Service Agent"? It has the "Editor" role. As far as I understand, this account is used internally by GCP and is not accessed by any custom resources I create as a user. Does it mean that there is no reason to reduce its permissions for the sake of complying with the best security practices?
You don't have to delete your default service account; however, at some point it's best to create accounts that have the minimum permissions required for the job and refine the permissions to suit your needs instead of using the default ones.
You have full control over this account, so you can change its permissions at any moment or even delete it:
Google creates the Compute Engine default service account and adds it to your project automatically but you have full control over the account.
The Compute Engine default service account is created with the IAM basic Editor role, but you can modify your service account's roles to control the service account's access to Google APIs.
You can disable or delete this service account from your project, but doing so might cause any applications that depend on the service account's credentials to fail.
If something stops working you can recover the account up to 90 days.
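A first step before touching anything is simply to find out which default service accounts still hold the broad Editor role. A sketch of that audit over a project IAM policy, assuming the plain JSON structure returned by the Resource Manager `getIamPolicy` call (the project number and emails are made-up examples):

```python
# Sketch: flag default service accounts that still hold the broad
# Editor role in a project IAM policy. The policy dict mirrors the
# structure returned by Resource Manager getIamPolicy; the project
# number and emails below are made-up examples.
import re

DEFAULT_SA_PATTERNS = [
    r"\d+-compute@developer\.gserviceaccount\.com$",  # Compute Engine default
    r".+@appspot\.gserviceaccount\.com$",             # App Engine default
]

def risky_editor_members(policy):
    """Return members of roles/editor that look like default service accounts."""
    risky = []
    for binding in policy["bindings"]:
        if binding["role"] != "roles/editor":
            continue
        for member in binding["members"]:
            email = member.split(":", 1)[-1]
            if any(re.search(p, email) for p in DEFAULT_SA_PATTERNS):
                risky.append(member)
    return risky

policy = {"bindings": [
    {"role": "roles/editor",
     "members": ["serviceAccount:123456789-compute@developer.gserviceaccount.com",
                 "user:dev@example.com"]},
]}
print(risky_editor_members(policy))
# ['serviceAccount:123456789-compute@developer.gserviceaccount.com']
```

Listing the offenders first lets you plan replacement custom service accounts before disabling anything that applications depend on.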
It's also advisable not to use service accounts during development at all, since this may pose a security risk in the future.
Regarding the Google APIs Service Agent:
This service account is designed specifically to run internal Google processes on your behalf. The account is owned by Google and is not listed in the Service Accounts section of Cloud Console
Additionally:
Certain resources rely on this service account and the default editor permissions granted to the service account. For example, managed instance groups and autoscaling use the credentials of this account to create, delete, and manage instances. If you revoke permissions to the service account, or modify the permissions in such a way that it does not grant permissions to create instances, this will cause managed instance groups and autoscaling to stop working.
For these reasons, you should not modify this service account's roles unless a role recommendation explicitly suggests that you modify them.
Having said that, we can conclude that removing either the default service account or the Google APIs Service Agent is risky and requires a lot of preparation (especially the latter).
Have a look at the best practices documentation describing what's recommended and what not when managing service accounts.
You can also have a look at securing them against exploitation and at changing the service account and access scopes for an instance.
When you talk about security, you especially talk about risk. So, what are the risks with the default service accounts?
If you use them on GCE or Cloud Run (the Compute Engine default service account), you have excessive permissions. If your environment is secured, the risk is low (especially on Cloud Run). On GCE the risk is higher, because you have to keep the VM up to date and control the firewall rules that allow access to your VM.
Note: by default, Google Cloud creates a VPC with firewall rules open to 0.0.0.0/0 on port 22 (SSH), RDP, and ICMP. That is also a default security issue to fix.
The App Engine default service account is used by App Engine and Cloud Functions by default. Same as Cloud Run, the risk can be considered as low.
Another important aspect is the ability to generate service account key files for those default service accounts. A service account key file is a simple JSON file with a private key in it. Here the risk is very high, because few developers take REAL care of the security of that file.
Note: in a previous company, the only security issues we had came from those files, especially with service accounts holding the Editor role.
Most of the time, a user doesn't need a service account key file to develop (I wrote a bunch of articles on that on Medium).
There are two ways to mitigate those risks.
Perform IaC (infrastructure as code, with products like Terraform) to create and deploy your projects and to enforce all the best security practices that you have defined in your company (a VPC without default firewall rules, no Editor role on service accounts, ...).
Use organisation policies, especially "Disable service account key creation" to prevent service account key creation, and "Disable Automatic IAM Grants for Default Service Accounts" to prevent the Editor role being granted to the default service accounts.
Deletion isn't the solution; a good knowledge of the risks, a good security culture in the team, and some organisation policies are the key.
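The two organisation policies mentioned above can be sketched as the boolean-policy payloads you would set at the organisation level. The structure below mirrors the Org Policy API's `booleanPolicy` shape, and the constraint IDs are the documented ones for these two policies:

```python
# Sketch: the two organisation policy constraints recommended above,
# expressed as boolean-policy payloads (Org Policy API booleanPolicy
# shape). Applying them is done via gcloud or the API, not this code.

POLICIES = [
    {"constraint": "constraints/iam.disableServiceAccountKeyCreation",
     "booleanPolicy": {"enforced": True}},   # block SA key file creation
    {"constraint": "constraints/iam.automaticIamGrantsForDefaultServiceAccounts",
     "booleanPolicy": {"enforced": True}},   # stop auto Editor grants
]

def enforced_constraints(policies):
    """Return the IDs of constraints that are actively enforced."""
    return [p["constraint"] for p in policies
            if p.get("booleanPolicy", {}).get("enforced")]

print(enforced_constraints(POLICIES))
```

With both enforced, new default service accounts no longer receive Editor automatically, and nobody can mint long-lived key files, which addresses the two highest risks described above.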

How to restrict users to single VPC in Google cloud platform?

If I have 2 VPCs set up for 2 different teams on a single project in GCP and want to give the IAM users access to one single VPC and the resources in that VPC only, how do I do that in Google Cloud Platform? What IAM roles have to be assigned to these users?
You can't achieve this easily and out of the box. The VPC is a resource, and you can restrict access on this resource. VMs (on this VPC) are also resources, and the permissions granted on the VPC aren't inherited by the resources that use this VPC.
You can use a new feature, named asset relationships, that provides you the relationships between assets. That way you could get the assets (resources) related to your VPC and enforce the same restriction on all of them. But you need to code this; it's not out of the box, and the feature is still in preview.

How do I restrict a Google service account?

If I create a service account in my project and give it to a third party, can that third party abuse it to create VM instances etc? Or is it only allowed to do things that I give it explicit permission to do?
In the "permissions" section of the Google developers console I can set the service account to "Can edit" or "Can view", but what do those mean?
If you give "edit" or "owner" permissions, the user can create, modify, or delete GCE VM instances (among other resources). If you only give "view" permissions, then they can't create, modify, or delete GCE VM instances.
However, you cannot give fine-grained permissions such as "user can only edit this VM instance, but not this other one".
Per Google Compute Engine docs:
Can View
Provides READ access:
Can see the state of your instances.
Can list and get any resource type.
Can Edit
Provides "Can View" access, plus:
Can modify instances.
On standard images released after March 22, 2012, can ssh into the project's instances.
Is Owner
Provides "Can Edit" access, plus:
Can change membership of the project.
Per Google Cloud Storage docs:
Project team members are given the following permissions based on their roles:
All Project Team Members
All project team members can list buckets within a project.
Project Editors
All project editors can list, create, and delete buckets.
Project Owners
All project owners can list, create, and delete buckets, and can also perform administrative tasks like adding and removing team members and changing billing. The project owners group is the owner of all buckets within a project, regardless of who may be the original bucket creator.
When you create a bucket without specifying an ACL, the project-private ACL is applied to the bucket automatically. This ACL provides additional permissions to team members, as described in default bucket ACLs.
Per Google Cloud SQL docs:
Team members may be authorized to have one of three levels of access:
“can View” (called Viewer in App Engine Console) allows read-only access.
“can Edit” (called Developer in App Engine Console) allows modify and delete access. This allows a developer to deploy the application and modify or configure its resources.
“is Owner” (called Owner in App Engine Console) allows full administrative access. This includes the ability to add members and set the authorization level of team members.