Instance not able to communicate - Same VPC & Subnet, Different Security Group

I have created a CloudFormation template and deployed it successfully. I have two EC2 instances in the SAME VPC and SAME subnet, but in different security groups. One instance has a MongoDB server installed on it; the other runs the Node server. I can access both instances without any issue; the problem happens when I try to connect to MongoDB from the Node server. It doesn't work. I have drilled the issue down to the fact that the two servers are not able to connect to each other. Below are my security groups for
DB Server
Application Server
I have already visited the threads below in this regard, but they did not help.
EC2 instance can't connect to RDS, from same VPC/Subnet
CloudFormation - Security Group VPC issue

You aren't allowing outgoing traffic from your application server over port 12077. I would really recommend deleting all the SecurityGroupEgress rules and allowing the default, which permits all egress.
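As a rough CloudFormation sketch of that suggestion (the resource and parameter names, such as AppServerSecurityGroup and VpcId, are placeholders and not taken from the original template): if you omit the SecurityGroupEgress property entirely, the group keeps its default rule allowing all outbound traffic.

    Parameters:
      VpcId:
        Type: AWS::EC2::VPC::Id
    Resources:
      AppServerSecurityGroup:
        Type: AWS::EC2::SecurityGroup
        Properties:
          GroupDescription: Application server security group
          VpcId: !Ref VpcId
          SecurityGroupIngress:
            - IpProtocol: tcp               # SSH access; adjust as needed
              FromPort: 22
              ToPort: 22
              CidrIp: 0.0.0.0/0
          # No SecurityGroupEgress property: the group keeps its default
          # allow-all outbound rule, so the app server can reach MongoDB
          # on whatever port it listens on (12077 in the answer above).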

Related

AWS EC2 accessible only through load balanced EC2 instances [duplicate]

I have my MongoDB deployed in an EC2 instance, nice and steady. I will (hopefully) soon have my Elastic Beanstalk load-balanced web app launched using Docker. However, I feel like my database is too sensitive to dockerize or beanstalk-ize, so I want to keep it in a plain EC2 instance.
My issue is with the security groups. How can I create a security group that will only accept MongoDB traffic (port 27017) from the Elastic Beanstalk instances? Since those EC2 instances will get created and destroyed arbitrarily, maybe I can get the least-common subnet of those?
When you create your Elastic Beanstalk application, you will choose a security group to assign to its EC2 instances.
For your MongoDB security group, allow traffic on port 27017 from the EB instances' security group. Done this way, only EC2 instances using that security group can access the MongoDB instance.
Note: when accessing your MongoDB instance from your EB app's EC2 instances, make sure you use the private IP address of the MongoDB instance, not the public IP address. If you use the public IP address, AWS doesn't recognize the connection as originating from the EB security group and will deny the connection.
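A minimal CloudFormation sketch of that approach; the parameter names and the assumption that you pass in the EB instances' security group ID are mine, not from the answer:

    Parameters:
      VpcId:
        Type: AWS::EC2::VPC::Id
      EbInstanceSecurityGroupId:
        Type: AWS::EC2::SecurityGroup::Id   # SG attached to the EB EC2 instances
    Resources:
      MongoSecurityGroup:
        Type: AWS::EC2::SecurityGroup
        Properties:
          GroupDescription: Allow MongoDB only from the EB instances
          VpcId: !Ref VpcId
          SecurityGroupIngress:
            - IpProtocol: tcp
              FromPort: 27017               # MongoDB default port
              ToPort: 27017
              SourceSecurityGroupId: !Ref EbInstanceSecurityGroupId

Because the rule references a security group rather than an IP range, it keeps matching as EB adds and removes instances, as long as the connections go to the MongoDB instance's private address.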

AWS ElasticBeanstalk Security Groups

I have a web application launched using Elastic Beanstalk (EB) with a load balancer, whose instances may be added or removed based on scaling triggers.
Now I have a Redis server hosted on EC2, listening on port 6379, and I want only the instances launched by this EB environment to have access to that port.
EB has a security group (SG) called sg-eb and Redis has an SG called sg-redis.
All of these are deployed in the same VPC but may or may not be in the same subnet.
How do I configure sg-redis so that all the instances under the EB environment have access to Redis? I tried adding sg-eb to sg-redis, allowing port 6379, but no luck. The only way I made it work was adding each instance's public IP to sg-redis so they have access. However, if the load balancer adds or removes an instance, I'll need to manually reconfigure sg-redis again.
Update #1
The Redis EC2 instance will have two IPs, one public and one private. You can find them by selecting the instance in the EC2 management console. Make sure you connect to that EC2 instance via its internal (private) IP.
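Since sg-eb and sg-redis already exist, the same idea can be expressed as a standalone ingress rule in CloudFormation; this is only a sketch, with both group IDs supplied through hypothetical parameters:

    Parameters:
      RedisSecurityGroupId:
        Type: AWS::EC2::SecurityGroup::Id   # the existing sg-redis
      EbSecurityGroupId:
        Type: AWS::EC2::SecurityGroup::Id   # the existing sg-eb
    Resources:
      RedisFromEb:
        Type: AWS::EC2::SecurityGroupIngress
        Properties:
          GroupId: !Ref RedisSecurityGroupId
          IpProtocol: tcp
          FromPort: 6379
          ToPort: 6379
          # Traffic is matched by source security group, so the rule keeps
          # working as EB scales; it only matches when the EB instances
          # connect to the Redis host's private IP, not its public one.
          SourceSecurityGroupId: !Ref EbSecurityGroupId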

How to connect to an RDS instance in a different AWS region

I have an EC2 instance in one region, N. California, and an RDS instance in EU/Ireland. I am not able to connect to the RDS instance from the EC2 instance; the connection times out. This tutorial by AWS says that I would need to use the public IP of the RDS instance in order to connect to it, but that public IP is not available in the AWS console, and I'm not even sure we're supposed to use anything other than the RDS endpoint. We're also not allowed to add a security group from one region to a security group in another region.
I am really unsure about how to proceed.
I am answering my own question because my solution worked, and it might be of use to someone else, considering that the AWS tutorial is wrong.
To connect, add a custom TCP rule on port 3306 to the ingress rules of the RDS instance's security group, with the Elastic IP (EIP) of your EC2 instance as the allowed host. Voila.
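In CloudFormation terms, that rule could look roughly like the sketch below; the parameter names are placeholders, and you supply the RDS instance's security group ID and the EC2 instance's Elastic IP yourself:

    Parameters:
      RdsSecurityGroupId:
        Type: AWS::EC2::SecurityGroup::Id   # SG attached to the RDS instance
      Ec2ElasticIp:
        Type: String                        # Elastic IP of the remote EC2 instance
    Resources:
      MySqlFromRemoteEc2:
        Type: AWS::EC2::SecurityGroupIngress
        Properties:
          GroupId: !Ref RdsSecurityGroupId
          IpProtocol: tcp
          FromPort: 3306                    # MySQL port
          ToPort: 3306
          # A cross-region rule can't reference a security group in the
          # other region, so allow the EC2 instance's public EIP as a /32.
          CidrIp: !Sub "${Ec2ElasticIp}/32"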

EC2 instance can't access ElastiCache

As the title suggests, I'm struggling to connect to my ElastiCache instance from my EC2 instance. I have an ORM that connects to Redis on my EC2 instance, and it was just failing in my logs, so I SSHed into the EC2 instance to try to connect to the Redis instance manually and got a timeout:

    Could not connect to Redis at <redis uri>: Connection timed out
They're in different VPCs (the ElastiCache instance and the EC2 instance), but in my ElastiCache instance's security group I have a custom TCP inbound rule on port 6379 from any source.
Halp.
You set up the security rule, but did you set up the VPC peering properly? From the AWS documentation:
A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IP addresses. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account within a single region.
http://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/Welcome.html
After you create the VPC peering connection, you also need to modify the routing tables.
Keep in mind that you need to modify BOTH routing tables.
You also need to add the CIDR of the local VPC.
It can be confusing which VPC is the "local" one and which is the "target".
In my case, the local VPC contained the EC2 instances that needed the Redis database in the other VPC. After creating the peering connection in this format, I needed to do two things:
Edit the routing tables for both the local and the target VPC.
Edit the security group of the Redis database to accept connections from the local VPC.
If set up accordingly, you should be able to connect from an EC2 instance in the local VPC to the Redis database in the target VPC.
Here is documentation from AWS that is relatively easy to follow:
http://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/vpc-pg.pdf
Your scenario can be found on page 16.
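Putting those steps together, here is a rough CloudFormation sketch of the peering setup. All of the parameter names and the example CIDR blocks (10.0.0.0/16 for the local VPC, 10.1.0.0/16 for the target VPC) are assumptions you would replace with your own values:

    Parameters:
      LocalVpcId:
        Type: AWS::EC2::VPC::Id             # VPC with the EC2 instances
      TargetVpcId:
        Type: AWS::EC2::VPC::Id             # VPC with the Redis database
      LocalRouteTableId:
        Type: String
      TargetRouteTableId:
        Type: String
      RedisSecurityGroupId:
        Type: AWS::EC2::SecurityGroup::Id
    Resources:
      Peering:
        Type: AWS::EC2::VPCPeeringConnection
        Properties:
          VpcId: !Ref LocalVpcId
          PeerVpcId: !Ref TargetVpcId
      # Routes are needed in BOTH route tables, one pointing at each peer.
      LocalToTargetRoute:
        Type: AWS::EC2::Route
        Properties:
          RouteTableId: !Ref LocalRouteTableId
          DestinationCidrBlock: 10.1.0.0/16   # target VPC CIDR (example)
          VpcPeeringConnectionId: !Ref Peering
      TargetToLocalRoute:
        Type: AWS::EC2::Route
        Properties:
          RouteTableId: !Ref TargetRouteTableId
          DestinationCidrBlock: 10.0.0.0/16   # local VPC CIDR (example)
          VpcPeeringConnectionId: !Ref Peering
      # Let the Redis security group accept connections from the local VPC.
      RedisFromLocalVpc:
        Type: AWS::EC2::SecurityGroupIngress
        Properties:
          GroupId: !Ref RedisSecurityGroupId
          IpProtocol: tcp
          FromPort: 6379
          ToPort: 6379
          CidrIp: 10.0.0.0/16                 # local VPC CIDR (example)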

How to connect to AWS ElastiCache?

Could someone give a step-by-step procedure for connecting to ElastiCache?
I'm trying to connect to a Redis ElastiCache node from inside my EC2 instance (SSHed in). I'm getting Connection Timed Out errors each time, and I can't figure out what's wrong with how I've configured my AWS settings.
They are in different VPCs, but in my ElastiCache VPC I have a custom TCP inbound rule on port 6379 that accepts from anywhere, and the two VPCs share an active peering connection that I set up. What more am I supposed to do?
EDIT:
I am trying to connect via the redis-cli command. I SSHed in because I was originally trying to connect via the node-redis module, since my EC2 instance hosts a Node server. So officially my two attempts are: 1. a scripted module, and 2. the redis-cli command provided in the AWS documentation.
As far as I can tell, I have also set up the route tables correctly according to this: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html#route-tables-vpc-peering
You cannot connect to ElastiCache from outside its VPC. It's a weird design decision on AWS's part, and although it's not documented well, it is documented here:
Amazon ElastiCache Nodes, deployed within a VPC, can never be accessed from the Internet or from EC2 Instances outside the VPC.
You can set your security groups to allow connections from everywhere, and it will look like it worked, but it won't actually let you connect from outside the VPC (also a weird design decision).
In your Redis cluster's properties you have a reference to its security group. Copy it.
On your EC2 instance you also have a security group. Edit this security group and add the ID of the Redis security group as the destination (where the CIDR would normally go) in an outbound rule for port 6379.
This way the two security groups are linked and the connection can be established.
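A minimal CloudFormation sketch of that outbound rule; this only matters if the instance's default allow-all outbound rule has been removed, and the parameter names are placeholders:

    Parameters:
      Ec2SecurityGroupId:
        Type: AWS::EC2::SecurityGroup::Id   # SG attached to your EC2 instance
      RedisSecurityGroupId:
        Type: AWS::EC2::SecurityGroup::Id   # SG referenced by the Redis cluster
    Resources:
      OutboundToRedis:
        Type: AWS::EC2::SecurityGroupEgress
        Properties:
          GroupId: !Ref Ec2SecurityGroupId
          IpProtocol: tcp
          FromPort: 6379
          ToPort: 6379
          # In a template the Redis group is referenced directly rather than
          # pasted into the CIDR field as you would in the console.
          DestinationSecurityGroupId: !Ref RedisSecurityGroupId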
Two things we might forget when trying to connect to ElastiCache:
Configuring an inbound TCP rule to allow incoming requests on port 6379
Adding the EC2 security group to the ElastiCache instance
The second one helped me.
Reference for (2): https://www.youtube.com/watch?v=fxjsxtcgDoc&ab_channel=HendyIrawanSocialEnterprise
Here are step-by-step instructions for connecting to a Redis ElastiCache cluster from an EC2 instance located in the same VPC as ElastiCache:
Connect to a Elasticache Redis Cluster's Node