AWS free instance (Ubuntu) inbound and outbound requests - amazon-web-services

I'm new to AWS.
I have a Django application with an API, and I deployed it on Heroku (free instance). To access the API URL from another external resource we need a static IP address. I came to know that the Heroku IP is not static; it keeps changing dynamically. To get a static IP on Heroku there is an add-on called QuotaGuard Static, which provides a static IP for inbound and outbound requests to the API.
Likewise, I want to know whether an AWS free-tier instance has a static IP for inbound and outbound requests to the API, or whether, as with Heroku, we need to add an add-on for it. Can you please suggest whether a static IP is available on a free-tier instance in AWS, or whether we need to go for a paid service? Thanks in advance.

The auto-assigned public IP stays the same while an EC2 instance keeps running, but it is released when the instance is stopped or terminated.
So after a stop/start, or if you terminate an instance and spawn a new one, the public IP will change.
To overcome this you can use an Elastic IP.
Elastic IPs do not change (it's like they are reserved for you).
The cool thing about Elastic IPs is that they are charged while not attached to a running instance, but the moment you attach one to an instance it is free.
So if you use an Elastic IP you have one static address that does not change even when you terminate the underlying EC2 instance and attach the address to a newly created instance.
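For completeness, the allocate-and-associate step can be scripted with boto3. This is only a minimal sketch, assuming credentials and region are already configured; the instance ID is a placeholder.

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate a new Elastic IP in the account.
allocation = ec2.allocate_address(Domain="vpc")
print("Allocated Elastic IP:", allocation["PublicIp"])

# Associate it with an existing instance (placeholder instance ID).
ec2.associate_address(
    InstanceId="i-0123456789abcdef0",
    AllocationId=allocation["AllocationId"],
)
```

If the instance is later replaced, re-running associate_address with the same AllocationId against the new instance keeps the public address identical.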

Related

Allow EC2 Instances to communicate with the Services of Kubernetes deployments

I am trying to get a Windows Server EC2 instance to communicate with a running Kubernetes Service. I do not want to have to route through the internet, as both the EC2 Instance and Service are sitting within a private subnet.
I am able to get communication through when using the private IP address of the Service, but because of the nature of Kubernetes, when the Service goes down for whatever reason, the private IP can change. I want to avoid this if possible.
I either want to communicate with the Service using a static private DNS name or some kind of static private IP address that I can create and bind to the Service during creation. Is either of these possible to do?
P.S. I have tried looking into internal LoadBalancers, but I can't get it to work. Don't even know if this is the right direction. https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/service/annotations/#traffic-routing. Currently I am using these annotations for EIP binding for some public-facing services.
Why not create a kubeconfig to access the EKS services through kubectl?
See the documentation: https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html
Or do you want to send traffic to the services?
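If the goal is to send traffic, one option along the lines of the annotations guide linked in the question is an internal NLB-backed Service. A minimal sketch using the Python kubernetes client, assuming the AWS Load Balancer Controller is installed; the Service name, selector, and ports are placeholders, and the annotation keys are taken from that guide:

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster

service = client.V1Service(
    metadata=client.V1ObjectMeta(
        name="my-internal-service",
        annotations={
            # Ask the AWS Load Balancer Controller for an internal NLB.
            "service.beta.kubernetes.io/aws-load-balancer-type": "external",
            "service.beta.kubernetes.io/aws-load-balancer-scheme": "internal",
            "service.beta.kubernetes.io/aws-load-balancer-nlb-target-type": "ip",
        },
    ),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",
        selector={"app": "my-app"},  # placeholder selector
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)

client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```

The Windows instance would then talk to the load balancer's private DNS name, which stays stable even when the pods behind the Service are replaced.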

How can I use my EC2 Elastic IP address as a proxy server?

Please, I need help using an Elastic IP (assigned to an instance) from AWS as a proxy server; the first picture shows how other people are doing it. Users on my website will use it to log in to another website that requires a static IP (one that doesn't change). I want to know how to authenticate the EC2 Elastic IP and use it as a proxy server. All I have so far is that I need to assign an Elastic IP to my instance on AWS, but I don't know how to authenticate the IPs once created. And I don't want to buy a static IP from another company like Bright Data or Smartproxy.
This picture shows how other people are doing it, and the IP is from Amazon:
Does your public IP change that often? Can you just have them give you their 'IP Chicken' address and use that?
I think based on what you are describing you want to whitelist IPs via an ACL. Here are some steps you can use:
https://www.purevpn.com/blog/whitelist-ip-addresses-on-aws/
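The blog post covers the console steps; the same kind of allowlist rule can also be sketched with boto3 against a security group. The group ID and client address below are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Allow inbound HTTPS only from one trusted address (placeholder values).
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "203.0.113.25/32", "Description": "trusted client"}],
    }],
)
```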
If you do not want to use their Public IP from wherever they are, here is another workflow:
VPN to a Bastion Host (Jump Host)
Put the public IP you whitelist for your application on the jump host
Require your users to access the jump host in order to reach the website.
You can use Amazon Elastic IPs without issue. It is a good idea.
Public IP addresses from AWS can only be used on AWS services (e.g. Amazon EC2 instances). They cannot be used on external services outside of AWS.
In your picture, the IP address is assigned to an Amazon EC2 instance.

Amazon MQ - Does the private IP change after a reboot?

I'm using the Amazon MQ managed service and have a question as to how MQ behaves on a reboot.
Will the private IP of the broker change or is it static?
I'm using Amazon MQ inside of a VPC.
Assuming you're using a single-instance broker, it will most likely stay the same. I couldn't find a direct documentation reference for this, but Amazon MQ broker nodes are managed EC2 instances, and an EC2 instance by default retains its private IP inside a VPC over its lifecycle.
The problem is that you don't control the lifecycle of the instance. If the instance is broken beyond repair, Amazon MQ may set up a new instance for you, which will get a different private IP address inside the VPC, but that should be rare. After a simple reboot it would be very unlikely.
If you're using an active/standby cluster, what I said concerning the IPs of the individual nodes should still be true, but which node is the active one may change.
If you need a hard guarantee that the IP addresses don't change, you can set up a private Network Load Balancer in front of your cluster. From the docs (emphasis mine):
When you create an internal load balancer, you can optionally specify one private IP address per subnet. If you do not specify an IP address from the subnet, Elastic Load Balancing chooses one for you. These private IP addresses provide your load balancer with static IP addresses that will not change during the life of the load balancer. You cannot change these private IP addresses after you create the load balancer.
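As a rough boto3 sketch of that option (the name, subnet IDs, and addresses are placeholders, and the listeners and target groups pointing at the broker ports still have to be created separately):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Create an internal NLB and pin one private IP per subnet.
response = elbv2.create_load_balancer(
    Name="mq-static-ip-nlb",
    Type="network",
    Scheme="internal",
    SubnetMappings=[
        {"SubnetId": "subnet-0aaa1111", "PrivateIPv4Address": "10.0.1.50"},
        {"SubnetId": "subnet-0bbb2222", "PrivateIPv4Address": "10.0.2.50"},
    ],
)
print(response["LoadBalancers"][0]["DNSName"])
```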
For most services in AWS you want to use the DNS name or CNAME to a service instead of any IP address unless there's a static IP address attached to it.

Accessing IP-restricted external services from EC2

Let's say I have a service running clustered on N EC2 instances. On top of that I have Amazon EKS and an Elastic Load Balancer. There is a service not managed by me, running outside of AWS, where I have an account that my services in AWS use via HTTP requests. When I created an account with this external service I was asked for the IP (range) of the services that will be calling it. There is my problem. Currently, let's say I have 3 EC2 instances with Elastic IP addresses (which are static), so I can just give those three IP addresses to the external service provider and everything works just fine. But in the future I might add more EC2 instances to scale out, and whitelisting new IP addresses with the external service is a pain. In some cases those whitelist change requests may take a week for the external service provider to approve, and I don't have that time. Even further, accessing this external service is the only reason I use static IPs for the EC2 instances, so if possible I would ditch the Elastic IPs.
So my question is: how can I arrange things so that when a request leaves AWS from any instance in my cluster, the external service provider always sees the same IP address for me as a service consumer?
Disclaimer: I don't actually have that setup running yet; I am in the middle of researching whether it would be a feasible option. So forgive me if my question sounds dumb for some obvious reason.
Something like network address translation (NAT) can solve your problem: a NAT gateway with an Elastic IP, with all outbound traffic rerouted through it.
The NAT gateway provided by AWS as a managed service can be expensive if your data traffic is big, so you can run your own NAT instance instead, but that is a bit more complicated to set up and maintain.
The main differences between a NAT gateway and a NAT instance are listed here.
The example below assumes that the EC2 instances are in a private subnet, but that doesn't have to be the case.
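A minimal boto3 sketch of that setup (all IDs are placeholders; the NAT gateway sits in a public subnet, and the private subnets' route table sends everything outbound through it):

```python
import boto3

ec2 = boto3.client("ec2")

# Elastic IP for the NAT gateway.
eip = ec2.allocate_address(Domain="vpc")

# The NAT gateway lives in a public subnet (placeholder subnet ID).
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0public111",
    AllocationId=eip["AllocationId"],
)
nat_id = nat["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Route all outbound traffic from the private subnets through the NAT gateway,
# so the external service always sees the gateway's Elastic IP.
ec2.create_route(
    RouteTableId="rtb-0private222",
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)
```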
I believe you need a proxy server in your environment with an Elastic IP. Basically you can use something like NGINX/Apache on an instance that holds the Elastic IP. Configure the web server to provide an endpoint to your EC2 instances and proxy-pass requests to the external endpoint.
For high availability, you can run a proxy in each Availability Zone, ideally managed by an Auto Scaling group that keeps at least one instance alive in each AZ. Going with this approach, you will need to make sure that the public IP assigned comes from your Elastic IP pool.
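Only to illustrate the proxy-pass idea in Python (NGINX or Apache would be the more robust choice in practice), here is a toy sketch where the instance running it holds the Elastic IP and forwards requests to the external endpoint; the external URL is a placeholder.

```python
import requests
from flask import Flask, Response, request

EXTERNAL_BASE = "https://external-service.example.com"  # placeholder external endpoint

app = Flask(__name__)

@app.route("/proxy/<path:path>", methods=["GET", "POST"])
def proxy(path):
    # Forward the incoming request to the external service and relay the response,
    # so the external provider only ever sees this host's Elastic IP.
    upstream = requests.request(
        method=request.method,
        url=f"{EXTERNAL_BASE}/{path}",
        params=request.args,
        data=request.get_data(),
        headers={"Content-Type": request.headers.get("Content-Type", "")},
        timeout=10,
    )
    return Response(upstream.content, status=upstream.status_code)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```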
Generally, hostnames are a better alternative to IP addresses for avoiding such situations, as they provide a static endpoint no matter what IP sits behind them. I'm not sure whether you can explore that path with your external API provider; it can be challenging when static-IP-based routing/whitelisting rules are in place.
This is what a NAT Gateway is for. NAT Gateways have an Elastic IP attached and allow the instances inside a VPC to make outbound connections, transparently, using the gateway's static address.

Create a static IP for Google Places API key restrictions on AWS Elastic Beanstalk with autoscaling

I use Google Places API, and I need to put a restriction on my API keys, more specifically an IP restriction because the calls are from a web server.
I am using AWS Elastic Beanstalk with an environment where I have a Load Balancer, Autoscaling, and a VPC. So the IP address changes every time a new EC2 server is created.
My question is :
How do I put a static IP (an Elastic IP?) on my environment?
I have found many similar posts like this one (https://stackoverflow.com/a/49200693/3954420) or this one (https://medium.com/@obezuk/how-to-use-elastic-beanstalk-with-an-ip-whitelisted-api-69a6f8b5f844) where I have to create a NAT Gateway.
But at the end it requires typing a target IP address, and unfortunately Google API server IPs are not static.
How can I use a NAT Gateway, or is there another way?
Thanks