Configure AWS Elastic Beanstalk instance to use 2 subnets - amazon-web-services

I have 3 AWS Elastic Beanstalk instances running Spring microservices. All of the microservices make POST requests to each other and use RDS for the database.
Should I isolate database traffic and microservices traffic into separate subnets?
If that is good practice, is it possible to assign two private IP addresses (one per subnet) to every AWS Elastic Beanstalk instance?

I don't think you can do this with Elastic Beanstalk, since its instances are created and terminated automatically. Instead, you should create the instances yourself and attach an Auto Scaling policy to them.
What I usually do is create my EC2 instances in a public subnet and RDS in a private subnet, then add the EC2 instances' Elastic IPs to the RDS security group. That way all database traffic goes through the EC2 instances, and all traffic reaching the EC2 instances is HTTPS coming from the ELB.
Adding the below steps as requested:
OK, so I am assuming you already know a bit about how to create the servers, RDS, etc.
Create an EC2 instance for each of your microservices.
Attach an EIP to each of these instances.
Add an Auto Scaling policy to increase or decrease the number of instances based on traffic/CPU utilization. Set the termination policy to remove the newest instance first, so the original instances holding the Elastic IPs are kept.
Add an ELB in front of these instances and attach an HTTPS/SSL certificate to secure the traffic.
Create the RDS instance in a private subnet and allow each instance's EIP on port 3306 in the RDS security group (a sketch follows below).
I think you should be able to do this then.
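For that last security-group step, a minimal boto3 sketch might look like the following; the group ID and the Elastic IP are placeholders, and this assumes a MySQL-compatible RDS engine listening on port 3306:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow MySQL traffic (3306) into the RDS security group only from one
# microservice instance's Elastic IP. Group ID and address are placeholders.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",            # RDS security group (placeholder)
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "IpRanges": [{
            "CidrIp": "203.0.113.10/32",       # the instance's EIP (placeholder)
            "Description": "microservice EIP",
        }],
    }],
)
```

Repeat the call (or add more entries to IpRanges) for each microservice instance's EIP.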

It's not good practice to communicate directly between instances in EB. The reason is that EB instances run in an Auto Scaling group, so AWS can terminate and replace them at any time, which changes their private IP addresses.
Such IP changes will break your application sooner or later. Instances in EB should be accessed through the load balancer rather than by private IP.
So if some instances are meant for private access only, you can separate them into an internal EB environment, as sketched below.
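If you go that route, the internal scheme is chosen when the environment is created. A minimal boto3 sketch, where every name, subnet ID, and the solution stack are placeholders:

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# All names, subnet IDs, and the solution stack below are placeholders;
# the relevant part is the aws:ec2:vpc options that keep the environment's
# load balancer and instances on the private side of the VPC.
eb.create_environment(
    ApplicationName="my-app",
    EnvironmentName="my-app-internal",
    SolutionStackName="<your platform's solution stack name>",
    OptionSettings=[
        {"Namespace": "aws:ec2:vpc", "OptionName": "ELBScheme", "Value": "internal"},
        {"Namespace": "aws:ec2:vpc", "OptionName": "Subnets",
         "Value": "subnet-aaaa1111,subnet-bbbb2222"},
        {"Namespace": "aws:ec2:vpc", "OptionName": "ELBSubnets",
         "Value": "subnet-aaaa1111,subnet-bbbb2222"},
    ],
)
```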

Related

How to verify Elastic Beanstalk and RDS in a VPC?

I believe I've successfully set up an Elastic Beanstalk instance and an RDS instance in a VPC! It is structured like this:
Elastic Load Balancer: publicly available to the internet
Elastic Beanstalk instance: in private subnets
RDS instance: in private subnets
What can I do, both inside AWS and through outside testing, to verify that my Elastic Beanstalk instance is protected in the VPC, and that my RDS instance is as well?
Thank you so much!
You can verify the following:
Ensure that the instances' security groups allow incoming traffic only from the security group of the ALB.
Ensure that the EB instances are in private subnets, i.e. they don't have public IPs.
Ensure that the RDS instance does not have the publicly accessible option enabled and that it is in private subnets.
Also ensure that the RDS security group allows incoming connections only from the EB instances.
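A minimal boto3 sketch to spot-check the last two items; the database identifier and security-group IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")
rds = boto3.client("rds")

DB_ID = "my-db-instance"         # placeholder RDS identifier
RDS_SG = "sg-0aaaaaaaaaaaaaaaa"  # placeholder RDS security group
EB_SG = "sg-0bbbbbbbbbbbbbbbb"   # placeholder EB instances' security group

# 1. The RDS instance should not be publicly accessible.
db = rds.describe_db_instances(DBInstanceIdentifier=DB_ID)["DBInstances"][0]
print("PubliclyAccessible:", db["PubliclyAccessible"])  # expect False

# 2. The RDS security group should only allow ingress from the EB instances' SG.
sg = ec2.describe_security_groups(GroupIds=[RDS_SG])["SecurityGroups"][0]
for rule in sg["IpPermissions"]:
    sources = [p["GroupId"] for p in rule.get("UserIdGroupPairs", [])]
    cidrs = [r["CidrIp"] for r in rule.get("IpRanges", [])]
    # Expect a single rule: port 3306, sources == [EB_SG], no open CIDR ranges.
    print(rule.get("FromPort"), "from security groups:", sources, "CIDRs:", cidrs)
```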
Adding on to @Marcin's comments, I would also do the following to ensure you are following best practices:
Enable access logs on the ELB so you have a record of who is accessing it; this definitely helps with troubleshooting.
Put your EC2 instances in an Auto Scaling group.
Create a certificate and terminate HTTPS connections on the ELB, or pass HTTPS through and terminate SSL on the EC2 instances.
Redirect all requests from HTTP to HTTPS using the redirect feature of the ELB.
Now, to answer your question on how to test the security:
Try to SSH into your EC2 instance directly. It should not work, since you only allow traffic from the ELB and your EC2 instances are in private subnets.
Try accessing your RDS instance. It should not work, since it only allows traffic from the EC2 security group.
Launch a bastion server (a blank EC2 instance) in AWS and try to access the EC2 instance. It should not work, since the instance should only allow traffic from the ELB security group.
Using the bastion, try accessing RDS. Same as above: it should not work, since RDS should only allow traffic from the EC2 security group.
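These manual checks can also be scripted. A small Python sketch, run from outside the VPC or from the bastion, where both endpoints are placeholders:

```python
import socket

# Placeholder endpoints: the app instance's address and the RDS endpoint.
TARGETS = [
    ("10.0.1.25", 22),                                      # SSH to the app instance
    ("mydb.example123.us-east-1.rds.amazonaws.com", 3306),  # RDS endpoint
]

for host, port in TARGETS:
    try:
        with socket.create_connection((host, port), timeout=5):
            print(f"{host}:{port} is reachable (unexpected if locked down correctly)")
    except OSError as exc:
        print(f"{host}:{port} blocked, as expected: {exc}")
```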

AWS EC2 accessible only through load balanced EC2 instances [duplicate]

I have my MongoDB deployed on an EC2 instance, nice and steady. I will (hopefully) have my Elastic Beanstalk load-balanced web app launched soon using Docker. However, I feel my database is too sensitive to dockerize or beanstalk-ize, so I want to keep it on a plain EC2 instance.
My issue is with regard to the security groups. How can I create a security group that will only accept MongoDB traffic (port 27017) from the Elastic Beanstalk? Since EC2 instances will get created and destroyed arbitrarily, maybe I can get the least-common subnet of those?
When you create your Elastic Beanstalk application, you choose a security group to assign to its EC2 instances.
In your MongoDB security group, allow traffic on port 27017 from that EB EC2 security group. Done this way, only EC2 instances using that security group can access the MongoDB instance.
Note: when accessing your MongoDB instance from your EB app's EC2 instances, make sure you use the private IP address of the MongoDB instance, not the public IP address. If you use the public IP address, AWS does not recognize the connection as originating from the EB security group and will deny it.
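As a sketch of the rule described above, using boto3 with placeholder group IDs (the real IDs come from your MongoDB instance's and your EB environment's security groups):

```python
import boto3

ec2 = boto3.client("ec2")

# Allow MongoDB traffic (27017) into the MongoDB instance's security group
# only from the Elastic Beanstalk instances' security group.
# Both group IDs are placeholders.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",   # MongoDB SG (placeholder)
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 27017,
        "ToPort": 27017,
        "UserIdGroupPairs": [{"GroupId": "sg-0fedcba9876543210"}],  # EB SG (placeholder)
    }],
)
```

Because the rule references a security group rather than fixed IPs, instances that Elastic Beanstalk creates or terminates are covered automatically.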

AWS ElasticBeanstalk Security Groups

I have a web application launched using Elastic Beanstalk (EB) with a load balancer, whose instances may be added or removed based on the scaling trigger.
Now I have a Redis server hosted on EC2 on port 6379, and I want only these EB instances (all the instances launched by this EB environment) to have access to that port.
EB has a security group (SG) called sg-eb and Redis has a SG called sg-redis.
All these are deployed under same VPC but may or may not be the same subnet.
How do I configure sg-redis so that all the instances under the EB environment have access to Redis? I tried adding sg-eb to sg-redis allowing port 6379, but no luck. The only way I made it work was adding each instance's public IP to sg-redis so they have access. However, if an instance is added or removed by scaling, I'll need to reconfigure sg-redis manually.
Update #1
The Redis EC2 instance has two IPs, one public and one private; you can find them by selecting the instance in the EC2 management console. Make sure you connect to that EC2 instance via its private IP. If you connect via the public IP, the traffic leaves the VPC and the sg-eb-based rule in sg-redis will not match.
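If you want to double-check which address to use, a quick boto3 lookup (the instance ID is a placeholder) prints both:

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder instance ID for the Redis host.
resp = ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"])
instance = resp["Reservations"][0]["Instances"][0]
print("Private IP:", instance["PrivateIpAddress"])             # connect to this one
print("Public IP: ", instance.get("PublicIpAddress", "none"))  # not matched by sg-eb rules
```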

Setup couchbase in ec2 across multiple availability zones

I am trying to setup couchbase cluster on AWS. I want my nodes to be distributed across multiple availability zones.
EC2 instances within an availability zone can reach each other using the private IP (private DNS name) assigned to them at creation, which does not change even if we restart the machine.
I am not able to reach an EC2 instance in another AZ using this private DNS name. One way this can be done is by using Elastic IPs, which have a per-region limit.
The question is: how can one EC2 instance reach another EC2 instance in a different AZ without an Elastic IP?
You do not want to use an Elastic IP for this; your statement that an Elastic IP is the solution to your issue is not correct. You want to use the private IP assigned to the instance when it was created.
The private IP will not change as long as the instances are deployed inside a VPC.
You have to use the private IP in order to keep all network traffic inside the VPC. Then you just need to make sure your Security Groups are configured correctly to allow traffic between the instances.
Amazon Web Services operates split-horizon DNS (a.k.a. split-brain DNS). The best practice when deploying Couchbase onto EC2 is to use hostnames, not IP addresses; see http://developer.couchbase.com/documentation/server/current/install/cloud-deployment.html. Amazon automatically returns a different IP when resolving the hostname, depending on whether the request originates inside or outside the VPC.
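You can observe the split-horizon behaviour with a one-liner; the hostname below is a placeholder public DNS name. Run from inside the VPC it should print the instance's private IP, and from outside, the public one:

```python
import socket

# Placeholder public DNS name of an EC2 instance.
host = "ec2-203-0-113-10.compute-1.amazonaws.com"
print(socket.gethostbyname(host))
```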

How Amazon ELB identifies new instances added

I am working with an Elastic Load Balancer along with AWS Auto Scaling. I have a setup in which instances are scaled up/down automatically based on NetworkIn, and it is working fine. I have a couple of questions regarding ELB.
How is a freshly launched Auto Scaling instance registered with the ELB automatically? I know we give the load balancer name while creating the Auto Scaling group; I want to know the real 'how'.
Can instances have multiple private IPs running different applications, with all of them visible to the ELB?
Explanation for (2): let's say I configure the instances so that they have multiple private IPs at launch time. Could those be exposed to the ELB rather than the public IP of the machine? Can the ELB see the private IPs of the instances launched under it?
How is a freshly launched Auto Scaling instance registered with the ELB automatically? I know we give the load balancer name while creating the Auto Scaling group; I want to know the real 'how'.
My guess is that it makes the RegisterInstancesWithLoadBalancer API call. You can do that in your own code too; it does not have to go through Auto Scaling (see the sketch below).
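If you ever want to do the registration yourself, the Classic Load Balancer call via boto3 looks roughly like this; the load balancer name and instance ID are placeholders:

```python
import boto3

# Classic Load Balancer API; the ALB/NLB equivalent is register_targets
# on the elbv2 client.
elb = boto3.client("elb")
elb.register_instances_with_load_balancer(
    LoadBalancerName="my-load-balancer",                 # placeholder
    Instances=[{"InstanceId": "i-0123456789abcdef0"}],   # placeholder
)
```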
Can instances have multiple private IPs running different applications, with all of them visible to the ELB?
Well, the ELB does not care about the IP address at all; it goes by the instance ID, unless it is in a VPC and uses an ENI. Even then, the ELB routes traffic only to the IP address attached to eth0.
Update:
Note:
When you register a multi-homed instance (an instance that has an elastic network interface (ENI) attached) with your load balancer, the load balancer will route traffic to the primary IP address of the instance (eth0).
Source: ELB Developer Guide