Is it possible to set up a static IP address or range of IPs for server requests to external dbs and web services?
We have a web app running on a Google Compute Engine managed instance group. We want to lock down access to our DB to specific IP addresses. However, the instances in the managed instance group are stateless, and their IPs change with each update to the web app. Is it possible to assign a static IP or range of IPs to our outbound server traffic?
We've investigated NAT gateways and VPC peering a bit, but neither seems to exactly fit our requirements. Something like a 'load balancer for outbound server traffic' is what we seem to be looking for...
Any advice greatly appreciated.
A use case would be where the IG has to connect back to a private cloud that is firewall-managed. Having the IG draw from a preset 'range' of IPs would be useful to security teams.
The security team could pre-program a range of IPs that the scalable IG would draw from.
This way the IG would draw from a preset and preconfigured (on the private cloud firewall) range of IPs when it scales up.
I don't think you can directly assign static IPs during the creation of a managed instance group or by specifying them in an instance template. The IPs will initially be set as ephemeral.
By going to VPC Network -> External IP Addresses you can reserve IPs, easily promote them to static IPs, and assign them to the specific VMs you wish (sketched below).
This is a good explanation of how reserving IPs for GCE works.
You can also reserve Static Internal Addresses if you so wish.
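For reference, a minimal gcloud sketch of both operations; all names, regions, and addresses below (my-static-ip, us-central1, 203.0.113.10, my-subnet) are placeholders:

```bash
# Promote the ephemeral external IP a VM is currently using to a static one
# by reserving that exact address (IP and region are placeholders).
gcloud compute addresses create my-static-ip \
    --addresses=203.0.113.10 \
    --region=us-central1

# Reserve a static internal address from a subnet for later assignment.
gcloud compute addresses create my-internal-ip \
    --region=us-central1 \
    --subnet=my-subnet \
    --addresses=10.128.0.10
```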
Hope this helps.
Locking down access to specific IPs is not a good idea if you have a GCP managed instance group. As of now, there is no option to give static external IPs to managed instance group instances, and even if you could, it would be meaningless because of the autoscaling behavior.
I recommend you look into this link if you are using the Cloud SQL managed service from GCP for your DBs.
Also, if you are using a non-managed/external DB, put an NGINX proxy with a static IP in front of the connection.
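A minimal sketch of that idea, assuming NGINX's stream module on a VM with a reserved static IP; the upstream host db.example.com and port 5432 are placeholder assumptions:

```bash
# TCP-proxy the database port through this VM, so the external DB
# only ever needs to whitelist this VM's static IP.
# The config replaces nginx.conf wholesale for brevity.
cat <<'EOF' | sudo tee /etc/nginx/nginx.conf
events {}

stream {
    server {
        listen 5432;                     # port the app connects to on the proxy
        proxy_pass db.example.com:5432;  # placeholder external database
    }
}
EOF
sudo nginx -s reload
```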
What is the standard way to block an external IP from accessing my GCP cluster? I'm happy for the answer to include another Google service.
Because your cluster is deployed on Compute Engine instances, you can simply set a firewall rule to discard connections from a specific IP, for example:
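A hedged gcloud sketch; the rule name, priority, and 198.51.100.4 address are placeholders:

```bash
# Deny all ingress from one external IP, at a higher priority
# (lower number) than the existing allow rules.
gcloud compute firewall-rules create block-bad-ip \
    --direction=INGRESS \
    --action=DENY \
    --rules=all \
    --source-ranges=198.51.100.4/32 \
    --priority=100
```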
If you use an HTTP load balancer, you can add a Cloud Armor policy to exclude some IPs, for example:
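A hedged Cloud Armor sketch; the policy name, backend service name, and address are placeholders:

```bash
# Create a policy, add a deny rule for one IP, attach it to the backend service.
gcloud compute security-policies create block-policy

gcloud compute security-policies rules create 1000 \
    --security-policy=block-policy \
    --src-ip-ranges=198.51.100.4/32 \
    --action=deny-403

gcloud compute backend-services update my-backend-service \
    --security-policy=block-policy \
    --global
```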
In both cases, keep in mind that IP filtering isn't very effective. A VPN or proxy can easily and freely be used on the internet to change the requester's source IP.
Is there an alternative to AWS's security groups in the Google Cloud Platform?
Following is the situation I have:
A basic Node.js server running in Cloud Run as a Docker image.
A PostgreSQL database at GCP.
A Redis instance at GCP.
What I want to do is make a 'security group' of sorts so that my PostgreSQL DB and Redis instance can only be accessed from my Node.js server and nowhere else. I don't want them to be publicly accessible via an IP.
What we do in AWS is that only services that are part of a security group can access each other.
I'm not very sure but I guess in GCP I need to make use of Firewall rules (not sure at all).
If I'm correct could someone please guide me as to how to go about this? And if I'm wrong could someone suggest the correct method?
GCP has firewall rules for its VPC that work similarly to AWS security groups. More details can be found here. You can place your PostgreSQL database, Redis instance, and Node.js server inside the GCP VPC.
Make the Node.js server available to the public via DNS.
Set the default-allow-internal rule, so that only the services present in the VPC can access each other (halting public access to the DB and Redis); a gcloud sketch follows below.
As an alternative approach, you may also keep all three servers public and only allow the Node.js server's IP address to access the DB and Redis servers, but the above solution is recommended.
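For a custom VPC, the equivalent of default-allow-internal can be created by hand. A minimal sketch; the network name and CIDR are placeholders:

```bash
# Allow traffic only from the VPC's own range, so the DB and Redis
# are reachable from the Node.js server but not from the internet.
gcloud compute firewall-rules create my-vpc-allow-internal \
    --network=my-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp,udp,icmp \
    --source-ranges=10.0.0.0/16
```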
Security groups in AWS are instance-attached, firewall-like components. So, for example, you can have an SG at the instance level, similar to configuring iptables on regular Linux.
On the other hand, Google firewall rules operate more at the network level. For that level of 'granularity', I'd say Security Groups map to instance-level firewalls, so your alternatives are to use one of the following:
firewalld
nftables
iptables
The thing is that in AWS you also have network ACLs, which attach to subnets (security groups themselves attach to instances, not subnets). NACLs, when attached to subnets, are kind of similar to Google firewall rules; still, they provide a bit more granularity, since you can have a different NACL per subnet, while in GCP firewall rules apply per network. At this level, protection should come from firewalls at the subnet level.
Thanks #amsh for the solution to the problem. But there were a few more things that needed to be done, so I guess it'll be better if I list them out here in case anyone needs them in the future:
Create a VPC network and add a subnet for a particular region (e.g., us-central1).
Create a VPC connector from the Serverless VPC Access section for the created VPC network in the same region.
In Cloud Run, add the created VPC connector in the Connections section.
Create the PostgreSQL and Redis instances in the same region as the created VPC network.
In the Private IP section of these instances, select the created VPC network. This will create a private IP for the respective instances in the region of the created VPC network.
Use this private IP in the Node.js server to connect to the instances and it'll be good to go. (A condensed gcloud sketch of these steps follows.)
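A condensed gcloud sketch of the network steps; every name, region, and CIDR below is a placeholder:

```bash
# 1. Custom VPC network with a subnet in one region.
gcloud compute networks create my-vpc --subnet-mode=custom
gcloud compute networks subnets create my-subnet \
    --network=my-vpc --region=us-central1 --range=10.0.0.0/24

# 2. Serverless VPC Access connector (its range must not overlap the subnet).
gcloud compute networks vpc-access connectors create my-connector \
    --network=my-vpc --region=us-central1 --range=10.8.0.0/28

# 3. Point the Cloud Run service at the connector.
gcloud run deploy my-service \
    --image=gcr.io/my-project/my-image \
    --region=us-central1 \
    --vpc-connector=my-connector
```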
Common Problems you might face:
Error while creating the VPC Connector: Ensure the IP range of the VPC connector and the VPC network do not overlap.
Different regions: Ensure all instances are in the same region as the VPC network, or they won't connect via the private IP.
Avoid changing the firewall rules: The firewall rules must not be changed unless you need them to perform differently than they normally do.
Instances in different regions: If the instances are spread across different regions, use VPC network peering to establish a connection between them.
I have four EC2 instances, three of them running API services and another running a user interface (UI). The UI instance obtains the data over API calls to the other instances. Right now everything works fine because I'm using the public IP provided for each EC2 instance for the API calls. But my concern is: what happens if the public IP of a service changes (for any reason)? Then my application goes down, because the UI cannot get the data from the services. After a little research I have found what appears to be a solution: use a VPC to connect the EC2 instances over private IPs (because they are static) and associate the UI instance with an Elastic IP (no problem here). So, I have some issues:
1) I ran a test putting all instances in the same VPC (and subnet), but when I ping from one to another, the pings fail. Is my approach right, or am I missing something?
2) I read about a couple of other options but I'm not sure which is best: maybe I have to use an API Gateway? Or a NAT Gateway?
3) What is the standard practice for communicating between EC2 instances in a private way?
1) I ran a test putting all instances in the same VPC (and subnet), but when I ping from one to another, the pings fail. Is my approach right, or am I missing something?
For security reasons, AWS blocks ICMP traffic by default via the security group. Enable ping traffic (ICMP) in the security group from the IPs you are trying to connect from; allowing the entire CIDR block of the VPC for all traffic will make your life a lot easier, but please make sure you do that in a test environment only.
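For example, with the AWS CLI; the security group ID and VPC CIDR are placeholders:

```bash
# Allow all ICMP (ping) from the VPC's CIDR block into the security group.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol icmp \
    --port -1 \
    --cidr 10.0.0.0/16
```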
2) I read about a couple of other options but I'm not sure which is best: maybe I have to use an API Gateway? Or a NAT Gateway?
As you mentioned, your concern is that the public IP of an instance will change (it definitely will if the instance stops/starts for any reason). You could use an Elastic IP for all of your instances; that could be one solution, but with this approach all of your instances would be exposed to the internet, so going with private IPs is the best option.
3) What is the standard practice for communicating between EC2 instances in a private way?
It depends on the use case. If your instances are in the same VPC, no extra configuration is required; you only need to make sure the security groups, network ACLs, and firewall configuration are correct.
If your instances are in different VPCs, you can use VPC peering or a Transit Gateway, for example:
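A hedged AWS CLI sketch of the VPC peering option; all IDs and the CIDR are placeholders:

```bash
# Request a peering connection between two VPCs, then accept it.
aws ec2 create-vpc-peering-connection \
    --vpc-id vpc-11111111 --peer-vpc-id vpc-22222222

aws ec2 accept-vpc-peering-connection \
    --vpc-peering-connection-id pcx-33333333

# Each side's route table also needs a route to the other VPC's CIDR.
aws ec2 create-route --route-table-id rtb-44444444 \
    --destination-cidr-block 10.1.0.0/16 \
    --vpc-peering-connection-id pcx-33333333
```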
1.) You need to update the security groups with a rule permitting ICMP traffic.
Go to your VPC -> select Security Groups -> select the relevant security group -> add an inbound/outbound rule for all traffic with the CIDR of the instance subnet.
2.) The internal network is the better way as long as all your traffic is going to be internal.
Thanks
Let's say I have a service running clustered on N EC2 instances. On top of that I have Amazon EKS and an Elastic Load Balancer. There is a service not managed by me running outside of AWS, where I have an account that my services in AWS use via HTTP requests. When I created an account with this external service, I was asked for an IP (range) of the services that would be using it. There is my problem. Currently, let's say I have 3 EC2 instances with Elastic IP addresses (which are static), so I can just give those three IP addresses to the external service provider and everything works just fine. But in the future I might add more EC2 instances to scale out, and whitelisting new IP addresses with the external service is a pain. In some cases those whitelist change requests may take a week to be approved by the external service provider, and I don't have that time. Even further, accessing this external service is the only reason I use static IPs for the EC2 instances. So if possible I would ditch the Elastic IPs.
So my question is: how could I arrange things so that when I make requests outside of AWS from a random instance in my cluster, the external service provider always sees the same IP address for me as a service consumer?
Disclaimer: I don't actually have that setup running yet; I am in the middle of researching whether it would be a feasible option. So forgive me if my question sounds dumb for some obvious reason.
Something like network address translation (NAT) can solve your problem: a NAT gateway with an Elastic IP, used to reroute all outbound traffic through it.
The NAT gateway provided by AWS as a service can be expensive if your data traffic is big, so you can run your own NAT instance instead, but that is a bit complicated to set up and maintain.
The main differences between a NAT gateway and a NAT instance are listed here.
The example below assumes that the EC2 instances are in a private subnet, but that doesn't have to be the case.
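The original example did not survive here; as a hedged reconstruction with the AWS CLI, the usual steps look roughly like this (all IDs are placeholders):

```bash
# Allocate a static Elastic IP for the NAT gateway.
aws ec2 allocate-address --domain vpc
# -> note the returned AllocationId, e.g. eipalloc-xxxx

# Create the NAT gateway in a PUBLIC subnet.
aws ec2 create-nat-gateway \
    --subnet-id subnet-public1 \
    --allocation-id eipalloc-xxxx

# Send the private subnet's internet-bound traffic through the NAT gateway,
# so the external service always sees the Elastic IP.
aws ec2 create-route \
    --route-table-id rtb-private1 \
    --destination-cidr-block 0.0.0.0/0 \
    --nat-gateway-id nat-yyyy
```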
I believe you need a proxy server in your environment with an Elastic IP. Basically, you can use something like NGINX/Apache and configure it with an Elastic IP. Configure the web server to provide an endpoint to your EC2 instances, and do a proxy pass to the external endpoint.
For high availability, you can run a proxy in each Availability Zone, ideally configured using an Auto Scaling group to keep at least one instance alive in each AZ. Going with this approach, you will need to make sure that you assign the public IP from your Elastic IP pool; a sketch follows.
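A minimal sketch of that proxy, assuming NGINX on the Elastic-IP instance; api.external-service.example is a placeholder endpoint:

```bash
# Reverse proxy on the static-IP instance: internal services call this host,
# and the external provider only ever sees the Elastic IP.
cat <<'EOF' | sudo tee /etc/nginx/conf.d/egress-proxy.conf
server {
    listen 8080;
    location / {
        proxy_pass https://api.external-service.example;
        proxy_set_header Host api.external-service.example;
    }
}
EOF
sudo nginx -s reload
```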
Generally, hostnames are a better alternative to IP addresses for avoiding such situations, as they provide a static endpoint no matter what IP sits behind them. Not sure whether you can explore that path with your external API provider; it can be challenging when static-IP-based routing/whitelisting rules are in place.
This is what a NAT Gateway is for. NAT Gateways have an Elastic IP attached and allow the instances inside a VPC to make outbound connections, transparently, using the gateway's static address.
We are trying to use Elastic Load Balancing in AWS with auto-scaling so we can scale in and out as needed.
Our application consists of several smaller applications, they are all on the same subnet and the same VPC.
We want to put our ELB between one of our apps and the rest.
The problem is that we want the load balancer to work both internally, between the different apps using an API, and internet-facing, because our application still has some usage that should happen externally and not through the API.
I've read this question but I could not figure out exactly how to do it from there; it does not really specify any steps, or maybe I did not understand it very well.
Can we have an ELB that is both internal and external?
For the record, I can only access this network through a VPN.
It is not possible for an Elastic Load Balancer to have both a public IP address and a private IP address. It is one or the other, but not both.
If you want your ELB to have a private IP address, then it cannot listen to requests from the internet.
If your ELB is public-facing, you can still call it from your internal EC2 instances using the public endpoint. However, there are some caveats that go with this:
The traffic will exit your VPC and re-enter it. It will not be the direct instance-to-ELB connection that a private IP address would afford you.
You also cannot reference security groups in your security group rules, since the traffic arrives from a public IP rather than from an instance's group.
There are 3 alternative scenarios:
Duplicate the ELB and EC2 instances, one dedicated to private traffic, one dedicated to public traffic.
Have 2 ELBs (one public, one private) that share the same back-end EC2 instances.
Don't use an ELB for either private or public traffic, and instead use an Elastic IP address (if public) or a private IP address (if private) on a single EC2 instance.
I disagree with #MattHouser's answer. Actually, in a VPC, your ELB has all of its internal interfaces listed under Network Interfaces, each with a public IP AND a primary private IP.
I've tested the private IP of my public ELB and it works exactly like the external one.
The problem is: these IPs are not listed anywhere in an up-to-date manner, the way a private ELB's DNS would list them. So you have to keep track of them yourself.
I've made a little POC script for this, with an internal Route53 hosted zone: https://gist.github.com/darylounet/3c6253c60b7dc52da927b80a0ae8d428
I made a Lambda function that checks which private IPs are assigned to the load balancer and updates a Route53 record when they change: https://github.com/Bramzor/lambda-sync-private-elb-ips
Using this function, you can easily make use of the ELB for private traffic. I personally use it to connect multiple regions to each other over inter-region VPC peering without needing an additional ELB.
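The core of both tools can be sketched with the AWS CLI; the ALB name, hosted zone ID, and record name below are placeholder assumptions, not the linked projects' actual code:

```bash
#!/usr/bin/env bash
# Look up the public ALB's current private IPs via its network interfaces,
# then upsert an internal Route53 A record pointing at them.
IPS=$(aws ec2 describe-network-interfaces \
    --filters "Name=description,Values=ELB app/my-alb/*" \
    --query 'NetworkInterfaces[].PrivateIpAddress' --output text)

RECORDS=$(for ip in $IPS; do printf '{"Value":"%s"},' "$ip"; done)
RECORDS=${RECORDS%,}  # drop the trailing comma

aws route53 change-resource-record-sets \
    --hosted-zone-id Z0000000000000 \
    --change-batch "{\"Changes\":[{\"Action\":\"UPSERT\",\"ResourceRecordSet\":{\"Name\":\"elb.internal.example.com\",\"Type\":\"A\",\"TTL\":60,\"ResourceRecords\":[$RECORDS]}}]}"
```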
The standard AWS solution would be to have an extra internal ELB for this.
Looks like #DaryL has an interesting workaround, but it could fail for 5 minutes if the DNS is not updated. Also, there is no way to have a separate security group for the internal IPs, since they share the ENI and the security group of the ELB's external IP.
I faced the same challenge, and I can confirm that the best solution so far is to have two different ALBs, one internet-facing and the other internal. You can attach both ALBs to a single Auto Scaling group so that both reach the same cluster.
Make sure the networking options (subnets, security groups) of both ALBs are the same in order for both to reach the same cluster instances. Auto Scaling and launch configurations work seamlessly with both ALBs attached to the same Auto Scaling group. This also works with ALBs created from Elastic Beanstalk environments; a sketch follows.
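A hedged AWS CLI sketch of the attachment step; the ASG name and target group ARNs are placeholders:

```bash
# Each ALB (public and internal) forwards to its own target group;
# attaching both target groups to one ASG registers every instance in both.
aws autoscaling attach-load-balancer-target-groups \
    --auto-scaling-group-name my-asg \
    --target-group-arns \
      "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/public-tg/aaaa111122223333" \
      "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/internal-tg/bbbb444455556666"
```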