Restrict a Load-Balanced Google backend bucket to a specific IP range - google-cloud-platform

I have a Google Storage bucket that I want to make accessible (anonymous, read-only) to a specific set of internet IPs (a whitelist).
I can expose the bucket with a load balancer, but I have not been able to find a way to apply any firewall/IP rules to it.
A Cloud Armor policy can only be applied to backend services, not backend buckets.
And GCP firewall rules only apply to VM instances.

There isn't currently an option for this specific ask. GCS buckets are mainly controlled through ACLs. However, with Cloud Armor in beta, this would be a perfect time for a feature request to include backend buckets as targets.

Related

Restrict access to a Google Cloud bucket to a certain IP range

I wanted to check if it's possible to restrict access to a Google Cloud bucket to a certain IP range and, if so, how we can restrict the access. We are exposing our cloud bucket to a vendor and we want to restrict bucket access to only the vendor's IP range.
The best solution for this is VPC Service Controls. This allows you to define a service perimeter that includes your project's GCS service and add an access level to allow access from the IPs you want.
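If it helps, here is a rough sketch of what the access-level side of that could look like. It only builds the basic access-level spec file; the CIDR range, file name, and level name are placeholders, and the spec would then be passed to `gcloud access-context-manager levels create` and referenced from your service perimeter.

```python
# Hedged sketch: write a VPC Service Controls basic access-level spec
# that allows requests from the vendor's IP range. The CIDR below is a
# placeholder for the vendor's actual range.
import yaml

basic_level_spec = [
    {"ipSubnetworks": ["203.0.113.0/24"]},  # placeholder vendor CIDR
]

with open("vendor-access-level.yaml", "w") as f:
    yaml.safe_dump(basic_level_spec, f)

# Then (roughly):
#   gcloud access-context-manager levels create vendor_ips \
#       --title="Vendor IPs" --basic-level-spec=vendor-access-level.yaml
# and attach the level to the perimeter that contains storage.googleapis.com.
```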

We have a query regarding GCP buckets: can we put a GCS bucket behind a VPN? Is that feasible in GCP?

Yes, you can set up Private Google Access and adjust your DNS so that traffic to the GCP APIs is routed through the VPN. Here is the documentation.
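As a concrete illustration of the DNS part (an assumption-laden sketch, not from the original answer): in a private zone for googleapis.com you point the APIs at the Private Google Access virtual IPs so that VPN traffic stays on Google's network. 199.36.153.8/30 is the documented private.googleapis.com range; the record list below is plain data you would recreate in Cloud DNS or your on-prem resolver.

```python
# Sketch of the records for a private googleapis.com zone reachable over
# the VPN. These are plain data, not API calls.
PRIVATE_GOOGLEAPIS_VIPS = ["199.36.153.8", "199.36.153.9",
                           "199.36.153.10", "199.36.153.11"]

records = [
    # A record that anchors the virtual IPs.
    {"name": "private.googleapis.com.", "type": "A",
     "rrdatas": PRIVATE_GOOGLEAPIS_VIPS},
    # CNAME the specific APIs you need (e.g. Cloud Storage) to that name.
    {"name": "storage.googleapis.com.", "type": "CNAME",
     "rrdatas": ["private.googleapis.com."]},
]

for r in records:
    print(r["name"], r["type"], ", ".join(r["rrdatas"]))
```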

How do I host a static website on a GCS bucket inside a VPC?

AWS makes this possible with private link: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints-s3.html
I want to do this with GCS.
I have a static HTML site I want to host on a GCS bucket,
BUT I want it hosted inside a VPC and to use GCP VPC firewall rules to control access.
Cloud Storage is hosted outside your VPC. You can't use firewall rules to control access to it.
However, to serve static files on the internet, you can put your files on Cloud Storage, create a Global HTTPS load balancer, and define your bucket as a backend.
You can also serve your static files through App Engine and use the App Engine firewall feature to achieve something similar to your requirements.
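For the Cloud Storage route, a minimal sketch of the upload and website-config step, assuming a bucket named my-static-site already exists (bucket and file names are placeholders); wiring the bucket up as the load balancer's backend bucket would then be done in the console or with gcloud.

```python
# Minimal sketch: push the static files into the bucket that the HTTPS
# load balancer (or App Engine) will serve. Names are placeholders.
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-static-site")

for filename in ("index.html", "404.html"):
    bucket.blob(filename).upload_from_filename(filename)

# Tell Cloud Storage which objects act as the main and error pages.
bucket.configure_website(main_page_suffix="index.html", not_found_page="404.html")
bucket.patch()
```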
I'm afraid that this is not currently a possibility. There is an ongoing Feature Request that you might find useful, as there are other customers trying to achieve your exact setup.
Access control in Google Cloud Storage is based on IAM permissions and ACLs, and these are not IP-based, so you cannot make use of VPC firewall rules.
Nonetheless, I believe the approach currently most suitable for achieving the desired behavior is to use VPC Service Controls, where you can define a service perimeter around storage.googleapis.com (notice that you cannot define the perimeter around an individual bucket, only around the whole service, meaning all the buckets within that project). Note, though, that it has certain limitations.
Strict VPC firewall rules won't apply within this setup, but you can define access levels to allow access to your buckets from outside the perimeter. Such levels are based on different conditions, such as IP address or user and service accounts. However, you cannot block access to certain ports as you can with VPC firewall rules.

AWS: keeping services on an internal domain instead of public

I have hosted a few services on AWS; however, all are public and can be accessed from anywhere, which is a security threat. Could you please let me know how to keep the services restricted to internal users of the organization without any authentication medium?
I found a workaround for this: if you have a list of IP ranges (maybe a network administrator can help you), take those and allow them in the load balancer's security group.
You should spend some time reviewing security recommendations on AWS. Some excellent sources are:
Whitepaper: AWS Security Best Practices
AWS re:Invent 2017: Best Practices for Managing Security Operations on AWS (SID206) - YouTube
AWS re:Invent 2017: Security Anti-Patterns: Mistakes to Avoid (FSV301) - YouTube
AWS operates under a Shared Responsibility Model, which means that AWS provides many security tools and capabilities, but it is your responsibility to use them correctly!
Basic things you should understand are:
Put public-facing resources in a Public Subnet. Everything else should go into a Private Subnet.
Configure Security Groups to only open a minimum number of ports and IP ranges to the Internet (see the sketch after this list).
If you only want to open resources to "internal users of organization without any authentication medium", then you should connect your organization's network to AWS via AWS Direct Connect (private fiber connection) or via an encrypted VPN connection.
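To make the security-group point concrete, here is a hedged boto3 sketch of opening only HTTPS, and only to an organization range; the group ID and CIDR are placeholders, not values from the question.

```python
# Sketch: allow inbound 443 only from the organization's public range,
# instead of 0.0.0.0/0. Group ID and CIDR are placeholders.
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "Office network only"}],
    }],
)
```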
Security should be your first consideration in everything you put in the cloud — and, to be honest, everything you put in your own data center, too.
Consider a least-privilege approach when planning your VPC network architecture, NACL and firewall rules, as well as IAM access and S3 buckets.
Least privilege: configure the minimum permissions and access required in IAM, bucket policies, VPC subnets, network ACLs and security groups, with a need-to-know, whitelist approach.
Start by having specific VPCs with two main network segments: 1) public and 2) private.
You will place your DMZ components in the public segment: components such as internet-facing web servers, load balancers, gateways, etc. fall here.
For the rest, such as applications, data, or internal-facing load balancers or web servers, make sure you place them in the private subnet, where you will use an internal IP address from a specified internal range to refer to the components inside the VPC.
If you have multiple VPCs and you want them to talk to each other, you can peer them together. You can also use Route 53 internal DNS to simplify naming.
Just in case you need internet access from the private segment, you can configure a NAT gateway in the public subnet and route outgoing internet traffic from the private subnet through it.
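A rough boto3 sketch of that public/private layout, using placeholder CIDRs; route-table wiring and error handling are omitted for brevity.

```python
# Sketch of the VPC layout described above: one public subnet (DMZ) and
# one private subnet, with an internet gateway and a NAT gateway for
# outbound-only access from the private side. All CIDRs are placeholders.
import boto3

ec2 = boto3.client("ec2")

vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# Public segment: internet-facing web servers, load balancers, gateways.
public_subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")["Subnet"]
# Private segment: applications, data, internal-facing components.
private_subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.2.0/24")["Subnet"]

# Internet gateway attached to the VPC for the public subnet.
igw = ec2.create_internet_gateway()["InternetGateway"]
ec2.attach_internet_gateway(InternetGatewayId=igw["InternetGatewayId"], VpcId=vpc_id)

# NAT gateway in the public subnet handles outgoing traffic from the
# private subnet (the private route table must point 0.0.0.0/0 at it).
eip = ec2.allocate_address(Domain="vpc")
ec2.create_nat_gateway(SubnetId=public_subnet["SubnetId"],
                       AllocationId=eip["AllocationId"])
```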
S3 buckets can be configured to be served through VPC endpoints (routing via an internal network rather than over the internet to the S3 buckets/objects).
In IAM you can create policies that whitelist source IPs and attach them to roles and users, which is a great way to combine VPN connections/whitelisted IPs and keep network access in harmony with IAM. That means even console access can be governed by a whitelisted policy.
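For illustration only (the CIDR and policy name are placeholders, and a blanket deny like this can also block AWS services acting on your behalf, so scope it carefully): an IAM policy that denies requests from outside a whitelisted range, created with boto3 and then attached to the relevant roles and users.

```python
# Sketch: IAM policy that denies any action when the request does not
# originate from the whitelisted office range. Placeholder values only.
import json
import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {"NotIpAddress": {"aws:SourceIp": ["203.0.113.0/24"]}},
    }],
}

iam.create_policy(
    PolicyName="DenyOutsideOfficeIPs",          # placeholder name
    PolicyDocument=json.dumps(policy_document),
)
```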

Restrict access to objects in S3

I would like to restrict access to objects stored in an Amazon S3 bucket.
I would like to allow all the users on our LAN (they may or may not have Amazon credentials, since the entire infrastructure is not on AWS). I have seen some discussion around IP address filtering and VPC endpoints. Can someone please help me here? I am not sure if I can use a VPC endpoint, since the users on our LAN are not in an Amazon VPC.
Is this possible?
Thanks
Most likely your corporate LAN uses static IP addresses. You can create S3 bucket policies to allow (or deny) access based on IP addresses. Here is a good AWS article on this:
Restricting Access to Specific IP Addresses
VPC Endpoints are for VPC-to-AWS-services connectivity (basically using Amazon's private network instead of the public internet). VPC Endpoints won't help you with corporate connectivity (except if you are using Direct Connect).
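A minimal sketch of the pattern from that article, applied with boto3; the bucket name and corporate CIDR are placeholders for your own values.

```python
# Sketch: bucket policy allowing anonymous read of objects only from the
# corporate LAN's public IP range. Bucket name and CIDR are placeholders.
import json
import boto3

s3 = boto3.client("s3")

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowReadFromCorporateLan",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
        "Condition": {"IpAddress": {"aws:SourceIp": "198.51.100.0/24"}},
    }],
}

s3.put_bucket_policy(Bucket="example-bucket", Policy=json.dumps(bucket_policy))
```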
Here is how I would solve it:
Configure users from a corporate directory who use identity federation with SAML.
Create groups.
Apply policies to the groups.
This will give you fine-grained control and less maintenance overhead.
This will help you control not only S3 but also any future workloads you migrate to AWS, and the permissions to those resources as well.
IP-based filtering is prone to security risks, is high-maintenance in the long run, and is not scalable.
EDIT:
Adding more documentation on how to do the above:
Integrating ADFS with AWS IAM:
https://aws.amazon.com/blogs/security/enabling-federation-to-aws-using-windows-active-directory-adfs-and-saml-2-0/
IAM Groups:
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups_create.html