I want to create a new Cloud Armor policy for my website, which is hosted on a GCP VM, to prevent DDoS attacks.
How do I configure protection against TCP floods, UDP floods, and ICMP? Also, what IPs should I put in the blacklist?
Please advise.
Thanks
According to the official documentation:
Google Cloud Armor security policies are available only for backend services behind an external HTTP(S) load balancer.
Google Cloud Armor security policies are made up of rules that filter traffic based on layer 3, 4, and 7 attributes. For example, you can specify conditions that match on an incoming request's IP address, IP range, region code, or request headers.
You can find a detailed explanation here:
Configuring Google Cloud Armor security policies
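As an illustration, here is a minimal gcloud sketch of that setup, assuming your site already sits behind an external HTTP(S) load balancer; the policy name, backend service name, and the blocked range are placeholders, not values from your project:
# Create a policy, add a layer 3 rule denying an example CIDR,
# and attach the policy to the load balancer's backend service.
gcloud compute security-policies create my-policy
gcloud compute security-policies rules create 1000 \
    --security-policy=my-policy \
    --src-ip-ranges="198.51.100.0/24" \
    --action=deny-403
gcloud compute backend-services update my-backend-service \
    --security-policy=my-policy --global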
#Vittal
Regarding the last part of your question (the first part has already been answered): "Also, what IPs should I put in the blacklist?"
What to block depends entirely on your business. But if your project is already subscribed to Cloud Armor Managed Protection Plus, you can use the Google Cloud Armor Threat Intelligence preconfigured feeds, such as:
1) Blocking the Tor network (iplist-tor-exit-nodes)
2) Blocking crawlers and search engine IPs (iplist-search-engines-crawlers)
You can find more of these threat intelligence feeds in the article below:
https://cloud.google.com/armor/docs/threat-intelligence#configure-nti
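For example, a hedged sketch of a rule that denies Tor exit node traffic (the priority and policy name are placeholders, and these rules require the subscription mentioned above):
gcloud compute security-policies rules create 500 \
    --security-policy=my-policy \
    --expression="evaluateThreatIntelligence('iplist-tor-exit-nodes')" \
    --action=deny-403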
Thanks
Manoj
Related
I am evaluating a Dialogflow ES trial and created an agent, with fulfillment enabled, to explore the features.
For that, I have configured the application service in the Dialogflow console under Fulfillment and specified the endpoint URL of the service, which is hosted in our secure network and environment. When a specific intent that has fulfillment enabled matches, it invokes the configured service, but the call fails with "Dialogflow fulfillment error: Webhook call failed. Error: DEADLINE_EXCEEDED." because the request is blocked by our firewall.
Please note that we are not hosted on Google Cloud Platform; we use another cloud provider and a firewall with custom rules.
I'm seeking assistance with whitelisting the IP addresses or DNS names from which Google Dialogflow fulfillment sends its traffic, since these seem to be dynamic and change every time, and the requests keep getting blocked by our firewall.
I went through the documentation below and tried allowing the IP address ranges specified, but the IP addresses from which Google sends the traffic are different. Also, it seems this is more specific to Google Cloud Platform:
https://cloud.google.com/vpc/docs/access-apis-external-ip#config
Also, configuring the dynamic IP address ranges from the goog.json and cloud.json files hosted on the internet, which keep updating daily, seems difficult to handle in our firewall:
https://cloud.google.com/vpc/docs/access-apis-external-ip#ip-addr-defaults
Can anyone please help me with how to whitelist dialogflow.cloud.google.com traffic in our firewall, given that its IP addresses and DNS are dynamic?
I recommend that you forget this solution and accept the traffic! OK, surprising; let me explain.
If you whitelist the Dialogflow URL or IPs, all users of Dialogflow will be authorized through your firewall. And because anyone can use Dialogflow, you would open the firewall to everybody.
So, don't waste time on that. "Don't trust the network," as Google says, but trust the authentication of the request. You can at least set a static API key on your webhook calls; it's much better than IP filtering (even if it's not very strong, it's still better).
I recommend you focus on this solution instead.
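As a quick illustration of that check, assuming a made-up endpoint URL and header name (Dialogflow ES lets you configure such a header under the fulfillment webhook settings), your service should behave like this:
# Rejected: no API key header.
curl -i -X POST https://webhook.example.com/fulfillment \
    -H "Content-Type: application/json" -d '{}'
# Accepted: header matches the static key configured in Dialogflow.
curl -i -X POST https://webhook.example.com/fulfillment \
    -H "Content-Type: application/json" \
    -H "X-Api-Key: YOUR_STATIC_KEY" -d '{}'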
I have a VM instance that receives a lot of spam/bot traffic attempting to hack the instance, such as requests to /blog/wp-includes/wlwmanifest.xml. Although none of these are successful, they add strain to the instance.
Is it possible to block specific endpoint attempts on a Google Cloud network?
So far I can only find a way to block specific IP addresses using the firewall.
I'm looking for something similar to the answer here: https://community.cloudflare.com/t/is-there-a-way-to-prevent-wp-path-probing/204761
Google Cloud firewall rules work at layers 3 and 4 of the OSI model, while HTTP/HTTPS works at layer 7. As a result, you won't be able to use Google Cloud firewall rules in this case.
As a solution, you can use a Web Application Firewall (WAF), which works at layer 7 of the OSI model. Google Cloud Platform provides WAF as a service: Google Cloud Armor.
Please have a look at the documentation About Google Cloud Armor security policies:
by using the Google Cloud Armor custom rules language reference, you
can create custom conditions that match on various attributes of the
incoming traffic, such as the URL path, request method, or request
header values.
and at the section Allow or deny traffic for a request URI that matches a regular expression:
The following expression matches with requests that contain the string
bad_path in the URI:
request.path.matches('/bad_path/')
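Putting those together, a minimal sketch of a rule against the WordPress probing described above (the policy name and priority are placeholders; the expression is a partial RE2 match, so it also catches paths like /blog/wp-includes/wlwmanifest.xml):
gcloud compute security-policies rules create 900 \
    --security-policy=my-policy \
    --expression="request.path.matches('/wp-includes/')" \
    --action=deny-404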
I would like to develop a Google Cloud Function that will subscribe to file changes in a Google Cloud Storage bucket and upload the file to a third party FTP site. This FTP site requires allow-listed IP addresses of clients.
As such, is it possible to get a static IP address for Google Cloud Functions containers?
Update: This feature is now available in GCP https://cloud.google.com/functions/docs/networking/network-settings#associate-static-ip
First of all, this is not an unreasonable request, so don't get gaslighted: AWS Lambdas have supported this feature for a while now. If you're interested in this feature, please star this feature request: https://issuetracker.google.com/issues/112629904
Secondly, we arrived at a workaround, which I also posted to that issue; maybe it will work for you too:
1. Set up a VPC Connector.
2. Create a Cloud NAT on the VPC.
3. Create a proxy host without a public IP, so its egress traffic is routed through Cloud NAT.
4. Configure a Cloud Function that uses the VPC Connector and is configured to send all outbound traffic through the proxy server.
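For reference, a minimal gcloud sketch of steps 1-3, with the region, network, and resource names as assumptions (installing the actual proxy software on the host is up to you):
gcloud compute networks vpc-access connectors create fn-connector \
    --region=us-central1 --network=default --range=10.8.0.0/28
gcloud compute routers create nat-router \
    --network=default --region=us-central1
gcloud compute routers nats create nat-config --router=nat-router \
    --router-region=us-central1 --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges
# No public IP, so the proxy's egress is routed through Cloud NAT.
gcloud compute instances create proxy-host \
    --zone=us-central1-a --no-address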
A caveat to this approach:
We wanted to put the proxy in a Managed Instance Group behind a GCP internal load balancer so that it would scale dynamically, but GCP Support has confirmed this is not possible, because the GCP ILB essentially allow-lists the subnet, and the Cloud Function CIDR is outside that subnet.
I hope this is helpful.
Update: Just the other day, they announced an early-access beta for this exact feature!!
"Cloud Functions PM here. We actually have an early-access preview of this feature if you'd like to test it out.
Please complete this form so we can add you..."
The form can be found in the Issue linked above.
See answer below -- it took a number of years, but this is now supported.
https://cloud.google.com/functions/docs/networking/network-settings#associate-static-ip
For those wanting to associate Cloud Functions with a static IP address in order to whitelist the IP for an API or something of the sort, I recommend checking out this step-by-step guide, which helped me a lot:
https://dev.to/alvardev/gcp-cloud-functions-with-a-static-ip-3fe9
I also want to note that this solution works for both Google Cloud Functions and Firebase Functions (as the latter is based on GCP).
This functionality is now natively part of Google Cloud Functions (see here)
It's a two-step process according to the GCF docs:
Associating function egress with a static IP address
In some cases, you might want traffic originating from your function to be associated with a static IP address. For example, this is useful if you are calling an external service that only allows requests from whitelisted IP addresses.
1. Route your function's egress through your VPC network. See the previous section, Routing function egress through your VPC network.
2. Set up Cloud NAT and specify a static IP address. Follow the guides at Specify subnet ranges for NAT and Specify IP addresses for NAT to set up Cloud NAT for the subnet associated with your function's Serverless VPC Access connector.
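Concretely, a hedged sketch of those two steps with gcloud, assuming a router already exists as in the workaround above (the function, connector, and address names are placeholders; add your usual runtime/trigger flags to the deploy command):
# 1. Route the function's egress through a Serverless VPC Access connector.
gcloud functions deploy my-function \
    --vpc-connector=fn-connector --egress-settings=all
# 2. Reserve a static address and tell Cloud NAT to use it.
gcloud compute addresses create nat-ip --region=us-central1
gcloud compute routers nats create nat-static-config --router=nat-router \
    --router-region=us-central1 --nat-external-ip-pool=nat-ip \
    --nat-all-subnet-ip-ranges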
Refer to the link below:
https://cloud.google.com/functions/docs/networking/network-settings#associate-static-ip
As per Google, the feature has been released; check out the whole thread:
https://issuetracker.google.com/issues/112629904
It's not possible to assign a static IP to Google Cloud Functions, as that is pretty much orthogonal to the nature of a 'serverless' architecture, i.e., servers are allocated and deallocated on demand.
You can, however, leverage an HTTP proxy to achieve a similar effect. Set up a Google Compute Engine instance, assign it a static IP, and install a proxy library such as https://www.npmjs.com/package/http-proxy. You can then route all your external API calls etc. through this proxy.
This probably reduces scale and flexibility, but it might be a workaround.
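A minimal sketch of the static IP part of that setup (the names, region, and zone are assumptions; installing and configuring the proxy software on the VM is a separate step):
gcloud compute addresses create proxy-ip --region=us-central1
gcloud compute instances create proxy-vm \
    --zone=us-central1-a --address=proxy-ip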
I recently started using GCP, but there is one thing I can't solve.
I have 1 VM + 1 DB instance + 1 LB. The DB instance allows connections only from the VM's IP, but the VM allows traffic from all IPs (if I configure the firewall to allow only Cloudflare and LB IPs, the website crashes and refuses connections).
Recently I was under attack. I activated Cloudflare's DDoS mode and restarted everything, but in about 6 hours the attack came back with Cloudflare still active. I saw MySQL connections jump from 20-30 to 254, all coming from the VM's IP, so I think the problem is the public accessibility of the VM, but I don't know how to solve it...
If I activate firewall rules to allow traffic only from the LB and Cloudflare, the website refuses all connections.
Any idea what I can do?
Thanks.
Cloud Support here. Unfortunately, we do not have visibility into what is installed on your instance or which software caused the issue.
Generally speaking, you're responsible for investigating the source of the vulnerability and taking steps to mitigate it.
Here are some hints that should help you:
Keep your firewall rules sensible; for obvious reasons, it is not good practice to have a rule that allows ingress connections on port 22 from all source IPs (see the example after these hints).
Since you've already been rooted, change all your passwords: within the Cloud SQL instance, within the GCE instance, even within the GCP project.
It's also a good idea to check who has access to your service accounts, just in case people that aren't currently working for you or your company still have access to them.
If you're using certificates, revoke them, generate new ones, and share them securely with the minimum required number of users.
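For the firewall point above, a hedged example of a sensible SSH rule (the network name and the admin range 203.0.113.0/24 are placeholders for your own values):
gcloud compute firewall-rules create allow-ssh-admin \
    --network=default --direction=INGRESS --action=ALLOW \
    --rules=tcp:22 --source-ranges=203.0.113.0/24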
Securing GCE instances is a shared responsibility; in general, the OWASP hardening guides are really good.
I'm quoting some info here from another StackOverflow thread that might be useful in your case:
General security advice for Google Cloud Platform instances:
Set user permissions at project level.
Connect securely to your instance.
Ensure the project firewall is not open to everyone on the internet.
Use a strong password and store passwords securely.
Ensure that all software is up to date.
Monitor project usage closely via the monitoring API to identify abnormal project usage.
To diagnose trouble with GCE instances, serial port output from the instance can be useful.
You can check the serial port output by clicking on the instance name and then on "Serial port 1 (console)". Note that these logs are wiped when instances are shut down & rebooted, and the log is not visible when the instance is not started.
Stackdriver monitoring is also helpful to provide an audit trail to
diagnose problems.
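The serial port output mentioned above can also be pulled with the CLI; a one-line sketch, where the instance name and zone are placeholders:
gcloud compute instances get-serial-port-output my-instance \
    --zone=us-central1-a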
You can use the Stackdriver Monitoring Console to set up alerting policies matching given conditions (under which a service is considered unhealthy) that can be set up to trigger email/SMS notifications.
This quickstart for Google Compute Engine instances can be completed in ~10 minutes and shows the convenience of monitoring instances.
Finally, here are some hints you can check on keeping GCP projects secure.