AWS CloudFormation communicating using internal IP addresses - amazon-web-services

I am trying to create a web application using AWS Cloudformation. This particular app will have 3 instances (Web server, App server, RDS database). I want these instances to be able to talk to each other. For example, the Web server should talk to App server, and the App server should talk to RDS database.
I can't figure out how to configure the servers so that they know each other's IP addresses. I figure there are 3 ways to do this, but I'm not sure which of them is realistically possible or feasible:
I can assign a fixed private IP address (e.g. 192.168.0.2 and so on) during stack creation - this way I know beforehand the IP address of each instance
I can wait for AWS CloudFormation to return the IP addresses of the created instances and manually tweak my code to communicate using these IPs
I can somehow get the IP address of the created instance during the stack creation process and pass it as a parameter to the next instance I create (not sure if CloudFormation allows this?)
Which is the best way to set this up? Also, please share a little bit of detail around how I can do this in Cloudformation.

A solution would be to place your Web server and App server behind an ELB (load balancer). This way, your web server will communicate with the app server using the ELB's URL (not the app server's IP). The app server can communicate with the RDS instance via the RDS instance's endpoint (which is again a URL).
Let's suppose you separate your infrastructure into 3 CloudFormation stacks: the RDS database, the app server and the web server. The RDS stack will expose the RDS instance's endpoint through the CloudFormation Outputs feature. This endpoint will in turn be used as a CloudFormation Parameter to the App server stack. You can insert the RDS endpoint in the App server LaunchConfiguration's UserData field, so that on startup, your App server will know the RDS instance's endpoint. Finally, your App server stack will expose the App server's ELB endpoint (again using the CloudFormation Outputs feature). Using the same recipe, the URL of your App server's ELB will be injected and used by your Web server stack.
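As a rough illustration of that wiring, here is a minimal boto3 sketch (stack names, output keys, parameter names and the template filename are hypothetical placeholders) that reads the RDS endpoint from the database stack's Outputs and passes it as a Parameter when creating the app server stack:

```python
import boto3

# Hypothetical stack/output/parameter names -- adjust to your own templates.
cfn = boto3.client("cloudformation")

# Read the "DbEndpoint" output exposed by the RDS stack.
rds_stack = cfn.describe_stacks(StackName="my-rds-stack")["Stacks"][0]
outputs = {o["OutputKey"]: o["OutputValue"] for o in rds_stack["Outputs"]}
db_endpoint = outputs["DbEndpoint"]

# Feed it into the app server stack as a Parameter; the app server template
# can then interpolate it into the LaunchConfiguration's UserData.
with open("app-server-template.yaml") as f:
    template_body = f.read()

cfn.create_stack(
    StackName="my-app-server-stack",
    TemplateBody=template_body,
    Parameters=[{"ParameterKey": "DbEndpoint", "ParameterValue": db_endpoint}],
    Capabilities=["CAPABILITY_IAM"],  # only needed if the template creates IAM resources
)
```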
As a side note, it is also a good idea to manage your services (web server, app server) using an Auto Scaling group. It is very probable that your instances will be terminated by factors out of your control. In that case, you would want the Auto Scaling group to launch a fresh instance and place it behind your ELB.

Related

Allow EC2 Instances to communicate with the Services of Kubernetes deployments

I am trying to get a Windows Server EC2 instance to communicate with a running Kubernetes Service. I do not want to have to route through the internet, as both the EC2 Instance and Service are sitting within a private subnet.
I am able to get communication through when using the private IP address of the Service, but because of the nature of Kubernetes when the Service goes down, for whatever reason, the private IP can change. I want to avoid this if possible.
I either want to communicate with the service using a static private DNS name or some kind of static private IP address I can create and bind to the Service during creation. Is either of these possible to do?
P.S. I have tried looking into internal LoadBalancers, but I can't get it to work. Don't even know if this is the right direction. https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/service/annotations/#traffic-routing. Currently I am using these annotations for EIP binding for some public-facing services.
Why not create a kubeconfig to access the EKS services through kubectl?
See documentation: https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html
Or do you want to send traffic to the services?
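If kubectl access is what you're after, the kubeconfig is usually generated with `aws eks update-kubeconfig --name <cluster>`; as a rough boto3 sketch (the cluster name is a placeholder), the values that file is built from can also be pulled straight from the EKS API:

```python
import boto3

# Hypothetical cluster name -- replace with your own.
eks = boto3.client("eks")
cluster = eks.describe_cluster(name="my-cluster")["cluster"]

# These are the pieces a kubeconfig entry needs; the AWS CLI's
# `aws eks update-kubeconfig` writes them into ~/.kube/config for you.
print("API endpoint:    ", cluster["endpoint"])
print("Cluster ARN:     ", cluster["arn"])
print("CA data (base64):", cluster["certificateAuthority"]["data"][:40], "...")
```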

How to retrieve the IP address of an EC2 (Server) from Lambda acting as a Client?

The app will be using gRPC with the Server listening and the Lambda connecting to the Server. The Lambda will have access to the VPC, but I am not sure of the best way to retrieve the Server's IP address.
VPC DNS routing can be enabled, but the actual name of the Server appears to be a function of the IP address and can change on each reboot.
Thanks,
I created an EC2 instance in a VPC with DNS enabled; its name is based on the IP address and changes with each reboot.
You could create a Route53 Private Hosted Zone to give the EC2 server(s) whatever DNS names you want within your VPC.
Or you could do something like add a specific tag to the EC2 instance(s) that the Lambda function needs to connect to, and then have the Lambda function call the AWS API to query for EC2 instances with that tag, retrieving the IP address from the response.
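For the tag-based approach, a minimal sketch of what the Lambda could do with boto3 (the tag key and value are made-up examples):

```python
import boto3

def get_server_ip(tag_key="Role", tag_value="grpc-server"):
    """Return the private IP of a running EC2 instance carrying the given tag.

    The tag key/value here are hypothetical -- use whatever you tag the
    gRPC server with.
    """
    ec2 = boto3.client("ec2")
    resp = ec2.describe_instances(
        Filters=[
            {"Name": f"tag:{tag_key}", "Values": [tag_value]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    for reservation in resp["Reservations"]:
        for instance in reservation["Instances"]:
            return instance["PrivateIpAddress"]
    raise RuntimeError("No running instance found with that tag")
```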
You could use the AWS Cloud Map service, which is relatively new. [1]
It is very well integrated into container services such as ECS, where the scheduler manages registering and deregistering entries. For EC2, you might have to write a script which queries EC2 instance metadata on startup and registers the instance with Cloud Map. [2]
In order to deregister an instance properly, you could put it into an Auto Scaling group and register lifecycle hooks which call the appropriate Cloud Map API actions.
[1] https://aws.amazon.com/de/blogs/aws/aws-cloud-map-easily-create-and-maintain-custom-maps-of-your-applications/
[2] https://docs.aws.amazon.com/cloud-map/latest/api/API_RegisterInstance.html
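A hedged sketch of what that startup registration could look like with boto3 (the Cloud Map service ID is a placeholder; see the RegisterInstance docs in [2]):

```python
import boto3
import urllib.request

def register_self_with_cloud_map(service_id="srv-xxxxxxxxxxxx"):
    """Register this EC2 instance's private IP with an AWS Cloud Map service.

    The service ID is a placeholder -- create the namespace/service in
    Cloud Map first and use its real ID here.
    """
    # Instance metadata is reachable from inside the instance.
    # (IMDSv1 shown for brevity; use IMDSv2 session tokens if enforced.)
    base = "http://169.254.169.254/latest/meta-data/"
    instance_id = urllib.request.urlopen(base + "instance-id").read().decode()
    private_ip = urllib.request.urlopen(base + "local-ipv4").read().decode()

    sd = boto3.client("servicediscovery")
    sd.register_instance(
        ServiceId=service_id,
        InstanceId=instance_id,
        Attributes={"AWS_INSTANCE_IPV4": private_ip},
    )
```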

Pointing a domain to securely connect to an ec2 instance running a python app

Say I have an AWS EC2 instance that is running a Python application on a certain port, say 8000. Also imagine I have a domain name, say www.abcd.com, that I own. What does it take to make my website use HTTPS and securely redirect to the app on my EC2 that is listening on port 8000? Is this even possible to do, or do I need something like nginx in between?
Firstly, you will need to ensure that your EC2 instance is in a public subnet with a public IP; it will also need its security group open on whatever port you are hitting it on (8000). At this point you should be able to hit your application at public-IP:port.
Now if you want to do the above while using a domain, you will want to use AWS's Route 53 service. With it you can create DNS routing for your domain: create a record mapping application.example.com to your instance's public IP. After doing so you should be able to visit application.example.com and hit your application. Once the load balancer described next is in front of it, it also becomes possible to make your EC2 instance private.
Now if you wish to include HTTPS on top of this, the best way would be to create a public load balancer with a certificate attached; this would accept HTTPS traffic from your users, then forward that traffic over HTTP to your EC2 instance on a selected port (8000).
After doing this you will want to change your Route53 entry to point to your load balancer instead of directly at your EC2.
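Roughly, the Route 53 change in that last step could look like this with boto3 (the hosted zone IDs, domain, and load balancer DNS name are placeholders):

```python
import boto3

route53 = boto3.client("route53")

# All IDs/names below are placeholders -- use your own hosted zone, domain,
# and the DNS name + canonical hosted zone ID of your load balancer.
route53.change_resource_record_sets(
    HostedZoneId="Z111111111111",  # your domain's hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "application.example.com",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z222222222222",  # the load balancer's own hosted zone ID
                    "DNSName": "my-alb-1234567890.us-east-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)
```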
Yes, it is totally possible.
Here is a step-wise procedure to do it:
You need to create a hosted zone in Amazon's Route 53 service
Then use the hosted zone's NS records to connect it with your domain (wherever you have registered it)
Then you need to point a record in your hosted zone at your EC2 instance's IP
Now you can access your EC2 instance using this domain, but it will not be HTTPS
For HTTPS, you need a certificate, which you can obtain from AWS Certificate Manager
After obtaining the certificate, follow the steps from this blog: How to set up HTTPS for your domain on AWS.
NOTE: These are just the high-level points; follow them and dig into the details of how exactly to do it in your case. I followed these steps while deploying with Elastic Beanstalk.
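For the certificate step, a minimal boto3 sketch (the domain names are placeholders; the DNS validation record ACM gives you still has to be added in Route 53 before the certificate is issued):

```python
import boto3

acm = boto3.client("acm")

# Domain names are placeholders -- request the certificate for your own domain.
response = acm.request_certificate(
    DomainName="www.abcd.com",
    ValidationMethod="DNS",
    SubjectAlternativeNames=["abcd.com"],
)
print("Certificate ARN:", response["CertificateArn"])
# Next: add the DNS validation CNAME that ACM returns to your hosted zone,
# then attach the issued certificate to your load balancer's HTTPS listener.
```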

Can't Access DB - Elastic Beanstalk + RDS + VPC (Virtual Private Cloud)

In trying to move a website to operate via Elastic Beanstalk (EB), I chose the t2 series of EC2 instances, and in doing so, was forced to create a Virtual Private Cloud (VPC). This site connects to a MySQL database via RDS, and I'm not having any luck getting the EB site to access the database.
I've tried reviewing this:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.RDS.html?icmpid=docs_elasticbeanstalk_console
The above link starts by saying "works great for development and testing environments, but is not ideal for a production environment", which confuses me as it doesn't say what would be better in its place - I need a database connected to the site!
It has all sorts of information, and I tried several of the things it suggested regarding connecting to an existing database (not creating a new one). Step 6 of the "To modify the ingress rules on your RDS instance's security group" section says to access the ingress tab, which doesn't exist for me.
I've tried editing the security group associated with the database via the RDS dashboard under "security groups", but it does not list the security groups that are associated with the VPC or the EC2 instance launched by EB. I tried adding the IP addresses and Elastic IPs, and still can't get the site to see the database.
I'm at a loss. Can anyone explain how to connect an EB-deployed EC2 instance with an RDS database through the VPC required by t2 instances?
The statement that "This works great for development and testing environments, but is not ideal for a production environment" is just referring to having Elastic Beanstalk create the RDS instance for you. This can be done by configuring the "Database" section when creating a new EB environment.
The downside of letting EB create the RDS instance for you is that your web instance and database instance will be strongly connected, and if you ever terminate your web instance, your database will also be terminated, including all of your snapshots.
However, I think you're taking the "external" part of "external database" too literally. Your RDS instance should definitely be within the same VPC as your web instance. However, you should create it and connect your web instance to it manually. Connecting to the database involves setting five environment variables (listed below) and configuring the security group to allow connections from the web instance to the database.
The environment variables you'll need to set on your web instance are as follows:
RDS_HOSTNAME=instancename.region.rds.amazonaws.com
RDS_DB_NAME=databasename
RDS_PASSWORD=databasepassword
RDS_USERNAME=databaseuser
RDS_PORT=3306
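A minimal sketch of reading those variables from application code (this assumes the PyMySQL driver, since the question's database is MySQL; any driver works the same way with these environment variables):

```python
import os
import pymysql  # assumption: MySQL driver, matching the question's database

# The RDS_* variables are the ones set on the web instance (see above).
connection = pymysql.connect(
    host=os.environ["RDS_HOSTNAME"],
    user=os.environ["RDS_USERNAME"],
    password=os.environ["RDS_PASSWORD"],
    database=os.environ["RDS_DB_NAME"],
    port=int(os.environ["RDS_PORT"]),
)
```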

DevOps, DNS and Public IP

I have a DevOps automation environment. Each successful build (web app) in Jenkins triggers the creation of an EC2 (Linux) instance in AWS which is set to receive a public IP, and the app gets deployed on that instance. I'm calling the web application using the instance's public IP. I need to mask the IP and call the app by a custom name. I have created a subdomain on Route 53, subdomain.abc.com. I have three sets of web apps and want to call them like one.subdomain.abc.com, two.subdomain.abc.com, etc.
Since we have a different VM each time, I'm not sure if an EIP is an option.
Can someone please suggest a solution ?
Many thanks in advance.
If you are using just one Amazon EC2 instance for each app, then for each app you can:
Create an Elastic IP address that will be permanently used with the app
Create an A record in Amazon Route 53 to point to that Elastic IP address (eg app1.example.com)
When a new instance of the app is launched, re-associate the Elastic IP address with the new instance (assuming your old instance is then terminated)
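The re-association in that last step can be done at the end of the Jenkins job; a minimal boto3 sketch (the allocation ID and instance ID are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholders -- the allocation ID of your app's Elastic IP and the ID of
# the instance the latest build just launched.
ec2.associate_address(
    AllocationId="eipalloc-0123456789abcdef0",
    InstanceId="i-0123456789abcdef0",
    AllowReassociation=True,  # move the EIP even if it is still attached to the old instance
)
```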
If you wish to serve traffic from app1.example.com to several Amazon EC2 instances, then create an ALIAS record in Route 53 to point to an Elastic Load Balancer and register the EC2 instances with the load balancer.