I'm trying to build a VoIP application on EKS. My setup would be something like:
Asterisk -> Kamailio -> PSTN
Where it could be any number of Asterisk instances behind Kamailio, and Kamailio's function is to present all the Asterisk instances to the PSTN provider under a single IP address.
Kamailio is behind a load balancer with a static IP address that I give to the PSTN provider to authenticate my requests. Although I can receive traffic through the load balancer, when my Kamailio sends a packet to the provider, the source IP is different, which causes problems.
Is there a way that the load balancer and the EC2 instance running Kamailio can share the same IP address?
Or is there another way of exposing the Kamailio EKS service with a static IP address that is not through a load balancer?
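One possible direction, assuming the AWS Load Balancer Controller is installed in the cluster: expose Kamailio through an NLB whose per-subnet addresses are pre-allocated Elastic IPs. The annotation names below are from the controller; the allocation and subnet IDs are placeholders. Note this pins the inbound address only; outbound packets from the pods still leave via the node or NAT gateway address unless that path is pinned too (e.g. a NAT gateway with its own Elastic IP that you also whitelist with the provider).

```yaml
# Sketch of a Service manifest exposing Kamailio via an AWS NLB
# bound to pre-allocated Elastic IPs (IDs are placeholders).
apiVersion: v1
kind: Service
metadata:
  name: kamailio
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
    # One allocation ID per subnet listed below.
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: "eipalloc-0abc123,eipalloc-0def456"
    service.beta.kubernetes.io/aws-load-balancer-subnets: "subnet-0aaa111,subnet-0bbb222"
spec:
  type: LoadBalancer
  selector:
    app: kamailio
  ports:
    - name: sip-udp
      protocol: UDP
      port: 5060
      targetPort: 5060
```

An NLB (unlike an ALB) supports both UDP and static per-AZ addresses, which is why it fits SIP traffic here.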
Related
I have an AWS VPC subnet in which hosts are dynamically created and destroyed. My frontend will have to connect to them. My plan was to reverse proxy to them.
Backend creates an EC2 instance in said subnet
Backend reads its internal IP address
I send the internal IP address to the frontend
Frontend connects to the DNS name of this network's load balancer, like this: my_public_dns_name.com/internal_ip
I wanted this load balancer to terminate TLS and forward the request to the IP address in the path of the request. The subnet has a /16 CIDR, so it is a little impractical to add 65k addresses manually for forwarding.
I couldn't figure out how to configure an AWS Application Load Balancer to do this. Is that even possible with it, or do I have to run my own reverse proxy on an instance?
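For what it's worth, an ALB cannot rewrite a path segment into a target address, but a small reverse proxy can. A minimal nginx sketch, assuming a 10.0.0.0/16 subnet and the path format above (certificate paths are placeholders); the captured address should be restricted to your subnet's range, or the proxy becomes an open relay:

```nginx
# Sketch: terminate TLS and forward /<internal_ip>/<rest> to that host.
server {
    listen 443 ssl;
    server_name my_public_dns_name.com;
    ssl_certificate     /etc/nginx/cert.pem;   # placeholder paths
    ssl_certificate_key /etc/nginx/key.pem;

    # Capture a 10.0.x.y address from the first path segment
    # (assumed subnet), plus the remainder of the path.
    location ~ ^/(?<target>10\.0\.\d{1,3}\.\d{1,3})(?<rest>/.*)$ {
        proxy_pass http://$target$rest;
    }
}
```

Because `proxy_pass` here uses variables, nginx picks the upstream per request; with an IP literal, no resolver directive is needed.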
We have a Google Cloud setup with a backend service (a managed instance group of 3 VMs) sitting behind a TCP load balancer. In the frontend configuration we have a static external IP address that directs traffic to port 6443, and a TCP health check on port 6443 is attached to the load balancer as well.
Of the 3 VM instances in the backend service, initially only one has a service running on port 6443, so the load balancer correctly shows one healthy instance and 2 unhealthy instances in the load balancer details page. In order to bring up a service on the unhealthy VM instances on port 6443, we need to connect to the healthy VM instance via the load balancer IP and the same port.
If we connect using the load balancer IP and port 6443 from outside the load balancer (not from the backend instances), we can see the connection going through successfully to the healthy VM instance in the backend service. However, when connecting to the load balancer from one of the unhealthy VM instances in the backend service, the connection is redirected back to that same instance instead of the healthy one, causing a connection refused error.
Is there a setting in the Google Cloud TCP load balancer to make connections to the load balancer IP and service port always go to one of the healthy instances, even if the source of the request is behind the load balancer?
As I understand it, you have configured a TCP load balancer with a MIG as the backend.
A TCP load balancer routes connections directly from clients to the healthy backends, without any interruption, and responses from the backend VMs go directly to the clients, not back through the load balancer.
It is expected behavior that if the client VM is itself a backend VM of the load balancer, connections sent to the load balancer's forwarding-rule IP address are always answered by that backend VM itself. This happens regardless of whether the backend VM is healthy, and it applies to all traffic sent to the load balancer's IP address, not just traffic on the protocol and ports specified in the load balancer's forwarding rule.
For troubleshooting purposes, I would suggest adding a test VM in a different VPC network in your project and trying to connect to the load balancer's external IP address from that VM.
In general networking terms, this is not possible.
As a basic example, suppose you have 3 internal private IP addresses, let's call them 192.168.1.2, 192.168.1.3, and 192.168.1.4, one for each of your VMs.
Then say they are all load balanced behind a shared virtual IP address of 192.168.1.5.
Now your router sits at 192.168.1.1, has a public IP of 10.10.10.1, and has been configured to direct incoming external traffic for a given port, say external port 80 for HTTP traffic, to the load-balanced IP on some private port, say 6443 for your use case.
Suppose 192.168.1.2, 192.168.1.3 are not healthy and 192.168.1.4 is.
192.168.1.2 sends a request to 10.10.10.1:80.
Your router will pick up that traffic on 192.168.1.1 and send a packet out to 10.10.10.1:80.
The packet then arrives on the router's interface at 10.10.10.1:80, and the router routes it, per your NAT/port-forward rules, to 192.168.1.5:6443.
The packet goes out on the wire to 192.168.1.5:6443 and since the only healthy host is 192.168.1.4, the packet gets processed by this host.
Any LB traffic from a backend will get looped back by the guest OS. The external routing mechanism does not see this traffic and this behavior cannot be changed. This is because the LB IP is part of the local routing table and the OS will always loopback destinations in this table. This is regardless of the backend's health state.
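The rule described above can be captured in a toy model (this is only an illustration of the routing behavior, not any real GCP API): a backend VM sending to the LB IP always answers itself, while an external client is routed to a healthy backend.

```python
def respond_host(source, backends, healthy):
    """Toy model of which host answers a packet sent to the LB IP.

    A backend VM sending to the load balancer's IP is answered by
    itself (the LB IP sits in its local routing table), regardless
    of health. External clients reach a healthy backend, if any.
    """
    if source in backends:
        return source  # looped back by the guest OS
    return next((b for b in backends if b in healthy), None)

backends = ["192.168.1.2", "192.168.1.3", "192.168.1.4"]
healthy = {"192.168.1.4"}

# An unhealthy backend connecting via the LB IP reaches itself,
# hence the connection refused error.
print(respond_host("192.168.1.2", backends, healthy))
# An external client reaches the healthy backend.
print(respond_host("10.10.10.99", backends, healthy))
```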
I have an ECS service running in AWS and I am going to create an Application Load Balancer for this service. I have read through this doc: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html, but what I don't quite understand is how I can specify an entry-point IP address for my load balancer. This IP address will be used by clients to send requests to my service. Based on my understanding, the IP should be configured on the load balancer, not on my ECS service's task.
Using an IP address for connecting to an Elastic Load Balancer is a bad idea. ELBs are elastic, which means there are multiple instances behind a single load balancer for fault tolerance. That's the reason AWS recommends using the hostname instead of the IP address.
If you still want to test connectivity using a load balancer IP address, you can try the nslookup command:
nslookup yourELBPublicDNS
This will give you multiple addresses back, and you can try to hit one. But keep in mind that those IP addresses may change: if the underlying host for the load balancer fails, it will be replaced by a new one, which will most likely have a new IP. What remains constant is the domain name, so using the hostname is recommended.
As mentioned in the other answer, using an IP is a bad idea, but not if it's a static IP. An NLB supports static IPs, while an Application Load Balancer does not.
If you are looking for a static IP, you need to place a Network Load Balancer in front of the Application Load Balancer: the ALB communicates with the backend ECS services, while the NLB faces the client. The client will be able to communicate using the static IP of the NLB, which will not change.
For each availability zone, you get a static IP for the NLB; you can check the further integration details in the AWS documentation.
If you are looking to allow only specific IPs to use your endpoint, then you need AWS WAF (Web Application Firewall).
I have an Elastic Beanstalk service (HTTP) with an Elastic IP address assigned.
I need the service to have an SSL certificate, so I created an Application Load Balancer.
App Load Balancer (HTTPS) >> EC2 (HTTP)
Is it possible to have public static IP addresses for my HTTPS service?
No, if you are terminating SSL on your load balancer this is not possible.
It may be possible to use a Network Load Balancer (NLB) with a proxy behind it which would allow you to use static IPs, but this seems overly complicated. Why do you need static IPs?
The architecture would look like:
NLB --TCP--> Proxy Layer --TCP--> ELB(SSL) --HTTP--> Back End
NLB layer can have static IPs
The proxy layer (HAProxy) in an autoscaling group forwards connections to the ELB
ELB does the SSL termination
Finally, the back end services in their own ASG
I'm not sure if this would be possible with Beanstalk though.
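A minimal HAProxy sketch for the proxy layer in that chain (the ELB DNS name is a placeholder; re-resolving it at runtime matters because an ELB's underlying IPs change):

```haproxy
# Sketch: TCP pass-through from the NLB to the ELB that terminates SSL.
resolvers awsdns
    nameserver vpc 169.254.169.253:53   # the VPC-provided DNS resolver
    hold valid 10s

frontend from_nlb
    bind *:443
    mode tcp
    default_backend to_elb

backend to_elb
    mode tcp
    # Placeholder ELB DNS name; re-resolved so ELB IP changes are tracked.
    server elb my-elb-123456.eu-west-1.elb.amazonaws.com:443 resolvers awsdns check
```

The NLB in front provides the static IPs; HAProxy only shuttles TCP, so the SSL termination stays on the ELB as described.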
I have a setup running on the Amazon cloud with a couple of EC2 instances behind a load balancer.
It is important that the site has a unique (static) IP or set of IPs, as I'm plugging in 3rd-party APIs which only accept requests made from IPs that have been added to their whitelist.
So basically unless we can give these 3rd parties a static IP or range of IPs that the requests from the site will always come from then we would be unable to make any calls to them.
Does anyone know how to achieve this? I know that Elastic IPs are not compatible with load balancers.
If I were to look up the IP of the load balancer DNS name (e.g. dualstack.awseb-BAMobile-ENV-xxxxxxxxx.eu-west-1.elb.amazonaws.com resolves to 200.200.200.200) would that IP be Static?
Any help/advice is greatly appreciated, guys.
The IP addresses of your load balancer are not static. In any event, your incoming load balancer IP wouldn't be used for outgoing connections.
You could assign Elastic IPs to the actual instances behind the load balancer, and those would then be used for outgoing requests. You get 5 free Elastic IPs, and I believe you can apply for more if you need them.
Additionally, if you are using a VPC and your instances are in a private subnet, they will only be able to access the internet via the NAT instance(s) you set up, and you can of course assign an Elastic IP to the NAT instances.
This is an old question, but things have changed now.
Now you can create a Network ELB to get a LB with a static IP.
from https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html
Support for static IP addresses for the load balancer. You can also assign one Elastic IP address per subnet enabled for the load balancer.
https://aws.amazon.com/blogs/aws/new-network-load-balancer-effortless-scaling-to-millions-of-requests-per-second/
You can attach an additional ENI (Elastic Network Interface) to an instance in your VPC. This way the ELB (Elastic Load Balancer) routes the incoming internet requests to the web server, and the additional ENI is used for your 3rd-party (or internal) requests (a management network).
You can see more details about it in the VPC documentation.
Really the only way I am aware of doing this is by setting up your instances within a VPC and having dedicated NAT instances by which all outbound traffic is routed.
Here is a link to the AWS documentation on how to set up NAT instances:
http://docs.amazonwebservices.com/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html
You CAN attach an elastic IP to the instances BUT NOT to the ELB (which is what the client sees).
You could use a full reverse-proxy Layer 7 load balancer like HAProxy.
Or a commercial implementation like Loadbalancer.org or Riverbed (Zeus)
They are both available in the AWS Marketplace.
Your outbound requests to your 3rd-party APIs will NOT go out via the ELB/ALB; that's for incoming connections. If you need an inbound static IP, you'll probably need to forgo the load balancer (or figure out how to implement Anshu's suggestion to attach an Elastic IP to the load balancers; the doc is light on details). Update: I found some documentation suggesting that ALBs do not use static addresses (and I tried binding an Elastic IP to one to be sure, and that failed).
If you're talking about outbound connections see below:
If your server is deployed in a public subnet, you can attach an Elastic IP to that host. Outbound communications will go out over that address.
If your server is deployed in a private subnet, there's a NAT gateway attached to it. All outbound traffic from your private subnet will go out over that interface.
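The two options above can be sketched in Terraform (a tool swap for illustration only; all referenced resources and IDs are placeholders, not a drop-in configuration):

```hcl
# Option 1: public subnet -- attach an Elastic IP directly to the instance,
# so its outbound traffic uses that fixed address.
resource "aws_eip" "outbound" {
  instance = aws_instance.app.id   # placeholder instance resource
}

# Option 2: private subnet -- route all outbound traffic through a NAT
# gateway that holds the Elastic IP.
resource "aws_eip" "nat" {
  domain = "vpc"
}

resource "aws_nat_gateway" "out" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public.id   # NAT gateway sits in a public subnet
}

resource "aws_route" "private_default" {
  route_table_id         = aws_route_table.private.id
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = aws_nat_gateway.out.id
}
```

Either way, the Elastic IP is the fixed outbound address you would give to the 3rd party for whitelisting.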
You could use, as already mentioned, a loadbalancer.org appliance in AWS. It would replace the AWS NAT instance and give greater functionality, including both Layer 4 and Layer 7, along with SSL termination and a WAF.
Best of all, you get free support in your 30-day trial in AWS to help you get up and running.
Yes, I am biased, as I work for loadbalancer.org; however, I would say nothing ventured, nothing gained.
You can use a DNS service like DNS Made Easy that allows "ANAME" records. These act like an A record but can be pointed at an FQDN or IP, so in this case you can point one at the ELB DNS name.
Dave