How to add GCP instances to AWS load balancer?

The majority of my servers are on AWS, where I am using a Classic Load Balancer. I also have a few instances running on GCP. How can I add those GCP instances to the AWS load balancer?

You cannot with the Classic Load Balancer. You can with the new Network Load Balancer provided that your Google instances are reachable via public IP addresses.
[EDIT after #michael's comment]
I have not actually tested NLB with Google instances. From the Amazon documentation, you can load balance Amazon resources together with on-premises resources using IP addresses. I am assuming this means that Google instances would be supported if they have public IP addresses.
Relevant text:
Load Balancing using IP addresses as Targets
You can load balance any application hosted in AWS or on-premises using IP addresses of the application backends as targets. This allows load balancing to an application backend hosted on any IP address and any interface on an instance. Each application hosted on the same instance can have an associated security group and use the same port. You can also use IP addresses as targets to load balance applications hosted in on-premises locations (over a Direct Connect connection) and EC2-Classic (using ClassicLink). The ability to load balance across AWS and on-prem resources helps you migrate-to-cloud, burst-to-cloud or failover-to-cloud.
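For reference, the mechanics the quoted documentation describes boil down to creating a target group whose target type is ip and registering the backend addresses directly. Below is a minimal, untested sketch with boto3; the VPC ID, target group name, and backend address are placeholders, and whether a given address is actually routable from your VPC still depends on your network setup (Direct Connect, VPN, etc.).

```python
# Hypothetical sketch using boto3: create an IP-target target group for an
# NLB and register an address that is not an EC2 instance.
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Target group whose targets are IP addresses rather than instance IDs
tg = elbv2.create_target_group(
    Name="external-backends",               # placeholder
    Protocol="TCP",
    Port=443,
    VpcId="vpc-0123456789abcdef0",          # placeholder VPC ID
    TargetType="ip",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Register a backend reachable from the VPC (e.g. on-premises over
# Direct Connect). AvailabilityZone="all" marks it as outside the VPC.
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": "10.20.30.40", "Port": 443, "AvailabilityZone": "all"}],
)
```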

Related

Load Balancing of 2 instances in AWS

I have two VMs (in the AWS cloud) connected to a single DB. Each VM runs the same application. I want to load balance these two VMs and route based on traffic (e.g. if traffic is higher on one VM instance, it should switch to the other VM).
Currently I am accessing the 2 instances via 2 different IP addresses over HTTP. Now I want to access those 2 VMs over HTTPS and route to them under the same DNS name, like (https://dns name/service1/),
(https://dns name/service2/)
How can I do load balancing using an nginx ingress?
I am new to the AWS cloud. Can someone help me, guide me, or suggest some appropriate references for getting to a solution?
AWS offers an Elastic Load Balancing service.
From What is Elastic Load Balancing? - Elastic Load Balancing:
Elastic Load Balancing automatically distributes your incoming traffic across multiple targets, such as EC2 instances, containers, and IP addresses, in one or more Availability Zones. It monitors the health of its registered targets, and routes traffic only to the healthy targets. Elastic Load Balancing scales your load balancer as your incoming traffic changes over time. It can automatically scale to the vast majority of workloads.
You can use this ELB service instead of running another Amazon EC2 instance with nginx. (Charges apply.)
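Since you mentioned routing https://dns name/service1/ and /service2/ to different backends, note that an Application Load Balancer can do this with path-based listener rules. A hedged boto3 sketch follows; it assumes an HTTPS listener and two target groups already exist, and all ARNs are placeholders.

```python
# Hypothetical sketch with boto3: route /service1/* and /service2/* to
# different target groups behind one Application Load Balancer.
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

LISTENER_ARN = "arn:aws:elasticloadbalancing:...:listener/app/my-alb/..."   # placeholder
SERVICE1_TG = "arn:aws:elasticloadbalancing:...:targetgroup/service1/..."   # placeholder
SERVICE2_TG = "arn:aws:elasticloadbalancing:...:targetgroup/service2/..."   # placeholder

for priority, (path, tg_arn) in enumerate(
    [("/service1/*", SERVICE1_TG), ("/service2/*", SERVICE2_TG)], start=1
):
    elbv2.create_rule(
        ListenerArn=LISTENER_ARN,
        Priority=priority,
        Conditions=[{"Field": "path-pattern", "Values": [path]}],
        Actions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
    )
```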
Alternatively, you could configure your domain name on Amazon Route 53 to use Weighted routing:
Weighted routing lets you associate multiple resources with a single domain name (example.com) or subdomain name (acme.example.com) and choose how much traffic is routed to each resource. This can be useful for a variety of purposes, including load balancing and testing new versions of software.
This would distribute the traffic when resolving the DNS Name rather than using a Load Balancer. It's not quite the same because DNS information is cached, so the same client would continue to be redirected to the same server until the cache is cleared. However, it is practically free to use.
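If you prefer the Route 53 approach, weighted routing is just two records with the same name but different SetIdentifier/Weight values. A minimal boto3 sketch, assuming the hosted zone ID, record name, and instance addresses shown here are placeholders:

```python
# Hypothetical sketch with boto3: two weighted A records pointing at the
# two instances, splitting traffic roughly 50/50 at DNS resolution time.
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0000000000000000000"  # placeholder
for identifier, ip, weight in [("vm-1", "203.0.113.10", 50), ("vm-2", "203.0.113.11", 50)]:
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",   # placeholder
                    "Type": "A",
                    "SetIdentifier": identifier,
                    "Weight": weight,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": ip}],
                },
            }]
        },
    )
```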

Google cloud-Internal Load balancer connectivity issue

I have created 2 VMs (web servers) in GCP in one region, with a TCP internal load balancer configured in the same region, and created another VM in a different region (Southeast Asia). Now, I am not able to ping the load balancer IP, but I am able to ping the web server IPs.
Webserver1 --- region: us-central --- 10.128.0.5
Webserver2 --- region: us-central --- 10.128.0.6
Internal load balancer IP --- 10.128.0.13
Test machine --- region: southeast asia --- 10.148.0.5
I understand that by "pinging the load balancer" you mean you want to check the health of the load balancer. However, pinging a load balancer is not possible, as it is a virtual part of the network and not a separate device. You can, however, check the health of the load balancer using the instructions in the following docs:
https://cloud.google.com/load-balancing/docs/internal/setting-up-internal
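Those docs check backend health with gcloud compute backend-services get-health. If you want to do the same programmatically, the sketch below uses the Compute API's regionBackendServices.getHealth call; the project, backend service name, and instance group URL are placeholders and this is untested.

```python
# Hypothetical sketch: query backend health for an internal TCP/UDP load
# balancer's regional backend service instead of pinging the VIP.
# Requires: pip install google-api-python-client google-auth
import googleapiclient.discovery

compute = googleapiclient.discovery.build("compute", "v1")

PROJECT = "my-project"                         # placeholder
REGION = "us-central1"
BACKEND_SERVICE = "my-ilb-backend-service"     # placeholder
INSTANCE_GROUP = (                             # placeholder backend instance group URL
    f"https://www.googleapis.com/compute/v1/projects/{PROJECT}"
    "/zones/us-central1-a/instanceGroups/webserver-group"
)

health = compute.regionBackendServices().getHealth(
    project=PROJECT,
    region=REGION,
    backendService=BACKEND_SERVICE,
    body={"group": INSTANCE_GROUP},
).execute()

for status in health.get("healthStatus", []):
    print(status["instance"], status["healthState"])  # e.g. HEALTHY / UNHEALTHY
```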
Hope that was helpful.
A GCP internal load balancer by design handles traffic within the same region. As your test machine is in a different region (Southeast Asia), you cannot reach the internal load balancer, which is in the us-central1 region.
Internal HTTP(S) Load Balancing distributes HTTP and HTTPS traffic to backends hosted on Compute Engine and Google Kubernetes Engine (GKE). The load balancer is accessible only in the chosen region of your Virtual Private Cloud (VPC) network on an internal IP address.
Kindly refer to the links below:
Internal TCP/UDP Load Balancing overview :
https://cloud.google.com/load-balancing/docs/l7-internal
Troubleshooting Internal TCP/UDP Load Balancing
https://cloud.google.com/load-balancing/docs/internal/troubleshooting-ilb
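Also note that even from a VM in the same region, ICMP ping to the internal load balancer's IP is not a meaningful test: the forwarding rule only answers on the protocol and port it is configured for. A quick TCP check from a same-region client, assuming the ILB forwards TCP port 80, might look like this:

```python
# Quick TCP reachability check against the internal load balancer VIP,
# run from a VM in the same region as the load balancer (assumes the
# forwarding rule is configured for TCP port 80).
import socket

ILB_IP = "10.128.0.13"
PORT = 80

with socket.create_connection((ILB_IP, PORT), timeout=5) as sock:
    print(f"Connected to {ILB_IP}:{PORT} via {sock.getsockname()}")
```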

AWS Global Accelerator static IP not working

So I have just set up an Application Load Balancer, but I need a static IP to whitelist my database connection. I found that Global Accelerator can do the job, so I have set it up and assigned it to the ALB. Everything shows fine in the console, but when I ping my domain (www.example.com), I don't see either of the 2 static IPs assigned... and when I whitelist both IPs, my application still cannot connect.
Am I doing something wrong?
Edit: My database is MongoDB hosted on the Atlas Cloud. In my staging environment I have secured the connection to a single server instance using that server's IP address. Now that I'm moving to a production environment with a load balancer, I'm not quite sure how I would achieve the same result, since I have multiple EC2 instances that can be created/destroyed via autoscaling. My thinking is that I need to whitelist the load balancer's IP address rather than individual instances.
I am assuming that your architecture is:
Domain name pointing to an Application Load Balancer in AWS
Load Balancer points to an Auto Scaling group of Amazon EC2 instances
The EC2 instances point to your MongoDB database hosted on the Atlas Cloud
You want a static IP address so that the database can permit access from the Amazon EC2 instances
While incoming traffic to the EC2 instances goes through the Load Balancer, please note that the connection from an EC2 instance to the database is a separate outbound connection that is established to the database. This traffic does not go through the Load Balancer. The only traffic coming 'out' of a Load Balancer is the response to requests that came 'in'.
The typical way to implement this architecture is:
Load Balancer in public subnets
Auto-Scaled Amazon EC2 instances in private subnets
A NAT Gateway in the public subnet(s)
This way, the instances in the private subnets can access the Internet via the NAT Gateway, yet they are fully isolated from traffic coming in from the Internet. It has the additional benefit that the NAT Gateway has a static IP address. All traffic going through the NAT Gateway to the Internet will 'appear' to be coming from this IP address.
For fault tolerance, it is recommended to put a NAT Gateway in at least two Availability Zones. Each will have its own static IP address.
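Sketching that setup with boto3 (all resource IDs below are placeholders, and this is untested): the Elastic IP allocated here is the address you would whitelist in Atlas.

```python
# Hypothetical sketch with boto3: allocate an Elastic IP, create a NAT
# Gateway in a public subnet, and route the private subnets' outbound
# traffic through it.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Static public address that outbound database traffic will appear to come from
eip = ec2.allocate_address(Domain="vpc")

nat = ec2.create_nat_gateway(
    SubnetId="subnet-0aaa1111bbbb2222c",      # public subnet (placeholder)
    AllocationId=eip["AllocationId"],
)
nat_gw_id = nat["NatGateway"]["NatGatewayId"]

# Wait until the NAT Gateway is available before adding the route
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_gw_id])

# Default route for the private subnets' route table
ec2.create_route(
    RouteTableId="rtb-0ddd3333eeee4444f",     # private route table (placeholder)
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_gw_id,
)

print("Whitelist this address in Atlas:", eip["PublicIp"])
```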
Oh, and you could consider moving your database to Amazon DocumentDB (with MongoDB Compatibility), which would reduce latency between the application servers and the database.

Assign a static IP address to an AWS Application Load Balancer

I have a load balancer created in my VPC with two subnets. Now I want to open a firewall rule from within my company intranet. I have no control over this firewall, and to open a rule on it, the firewall team accepts only IP addresses, not DNS names.
But since the IP address of the load balancer keeps changing, I can't give it to the firewall team. That's where I am stuck.
How can I open a firewall rule to an AWS load balancer from within my intranet?
You are correct that an Application Load Balancer does not provide static IP addresses.
You might be able to change to a Network Load Balancer:
Elastic Load Balancing creates a network interface for each Availability Zone you enable. Each load balancer node in the Availability Zone uses this network interface to get a static IP address. When you create an Internet-facing load balancer, you can optionally associate one Elastic IP address per subnet.
It is also possible to put a Network Load Balancer in front of an Application Load Balancer to gain the benefits of both.
See: Using static IP addresses for Application Load Balancers | AWS Networking & Content Delivery Blog
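If you do switch, the static addresses come from Elastic IPs attached at creation time through subnet mappings. A hedged boto3 sketch, where the subnet IDs and EIP allocation IDs are placeholders:

```python
# Hypothetical sketch with boto3: create an internet-facing Network Load
# Balancer with one Elastic IP per subnet, giving it fixed addresses that
# a firewall team can whitelist.
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

nlb = elbv2.create_load_balancer(
    Name="static-ip-nlb",                      # placeholder
    Type="network",
    Scheme="internet-facing",
    SubnetMappings=[
        {"SubnetId": "subnet-0aaa1111bbbb2222c", "AllocationId": "eipalloc-0123456789abcdef0"},
        {"SubnetId": "subnet-0ccc3333dddd4444e", "AllocationId": "eipalloc-0fedcba9876543210"},
    ],
)
print(nlb["LoadBalancers"][0]["DNSName"])
```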

Adding a public static ipv4 address to an AWS load balancer

I have a load balancer configured with an IPv4 address. However, what is actually provided is a DNS name that maps to the load balancer, of the format *.ap-south-1.elb.amazonaws.com.
I need to configure IoT devices to send data to the load balancer, and they do not support DNS. How can I assign a static IP address like ... to my load balancer so that I can configure my IoT devices to send data to it?
The Elastic IPs section does not provide a facility to allocate one to a load balancer; it only supports EC2 instances.
Conclusion:
I have found a way to use DNS on my IoT devices, and getting this working was vital. I am now aware of the option of manually hosting a load balancer on an EC2 instance. A simpler alternative is forwarding all requests from an Elastic-IP-addressed EC2 instance to the load balancer. However, this would cause a bottleneck at the transparent proxy. Hence, I think using the DNS feature on the IoT devices is the best option.
Elastic Load Balancers do not support static IP addresses. They only support DNS CNAMEs (or Aliases if you are using Route 53). This is because ELB DNS entries will resolve to different IP addresses depending on how it is scaling between availability zones. Also, over time, the IP addresses will/may change.
The AWS documentation also specifically states to create CNAME records only when mapping custom DNS entries to your ELB. If you are using Route 53, you can create an Alias record, which looks like an A record to the outside world.
If you need a static IP address, then you cannot use ELB.
Instead, you will need to manage your own load balancer (HAProxy, nginx, etc.) on an EC2 instance using an Elastic IP address.
It is not possible to assign a static IP to the Elastic Load Balancer. You need to use the DNS name only.
The only way I am aware of doing this is by setting up your instances within a VPC and having dedicated NAT instances by which all outbound traffic is routed.
Here is a link to the AWS documentation on how to set up NAT instances:
http://docs.amazonwebservices.com/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html
AWS Elastic Load Balancer does not support assigning a static IP address, for several reasons.
Looking at your problem, the issue you are facing is having a large number of data sources pumping data into AWS. I suggest you use the AWS Kinesis Firehose service instead of the current approach, as Firehose focuses specifically on streaming data into AWS.
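For what it's worth, pushing a record into Firehose is a single API call, assuming the devices (or a gateway in front of them) can reach the Firehose endpoint. A minimal boto3 sketch, where the delivery stream name and payload are placeholders:

```python
# Hypothetical sketch with boto3: push a device reading into a Kinesis Data
# Firehose delivery stream.
import json
import boto3

firehose = boto3.client("firehose", region_name="ap-south-1")

reading = {"device_id": "sensor-42", "temperature_c": 21.5}   # placeholder payload

firehose.put_record(
    DeliveryStreamName="iot-ingest-stream",                   # placeholder stream name
    Record={"Data": (json.dumps(reading) + "\n").encode("utf-8")},
)
```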