AWS ECS: Cannot connect to webserver running in EC2 instance - amazon-web-services

I'm using the AWS CDK to create what should be a simple infrastructure:
A single EC2 instance;
Running a webserver from a Docker image;
Using Elastic Container Service (ECS) so I don't have to manage the container
I can get it all up and running but am unable to reach the webserver (e.g. by visiting the EC2 instance's IP in the browser, or with wget), and I can't figure out why I can't reach it.
Things I've tried/discovered:
In the same CDK script I can directly create an EC2 instance, run the docker image, and connect to it across the Internet. So the image and VPC are working as expected.
I've tried the AWS_VPC and BRIDGE network modes in my task definition with the same outcome (everything runs, but I can't connect to the server).
If I use AWS_VPC mode I end up with two network interfaces associated with my EC2 instance. Even if I make sure both have security groups allowing incoming port-80 traffic I still can't connect directly to the EC2 instance.
I've looked over all of the official ecs and ec2 examples. I'm doing the same things that the examples are doing, though none of the examples seem to set up a web-visible server.
I've read over the CDK docs for ecs and ec2, but couldn't find an explanation there.
I've done side-by-side comparisons of the network & security settings for a plain EC2 instance that is reachable versus the EC2-managed-by-ECS instance that isn't -- they seem to be functionally equivalent.
Everything outside of the container seems to be set up to enable talking to the EC2 instance over the web, so I'm assuming it's something about the task/container itself. My best guess is that it has something to do with the task's network mode, but I haven't found a configuration or documentation that's gotten me to the answer.
Does anyone have an idea why I can't reach this EC2 instance? Or have any example CDK scripts doing something similar to this for reference?
Here's the minimum CDK script that I expect to result in a reachable webserver (but doesn't), using a demo nginx container as a hello-world:
import {
  Stack,
  StackProps,
  aws_ec2 as ec2,
  aws_ecs as ecs,
} from 'aws-cdk-lib';
import { Construct } from 'constructs';

export class MyStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // VPC
    const vpc = new ec2.Vpc(this, 'VPC', {
      enableDnsSupport: true,
      enableDnsHostnames: true,
      subnetConfiguration: [{ name: 'PublicSubnet', subnetType: ec2.SubnetType.PUBLIC }],
    });

    // ECS
    const cluster = new ecs.Cluster(this, 'Cluster', {
      vpc,
      capacity: {
        instanceType: ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.NANO),
        machineImage: ecs.EcsOptimizedImage.amazonLinux(),
        desiredCapacity: 1,
      },
    });
    cluster.connections.allowFromAnyIpv4(ec2.Port.tcp(80));

    // TASK DEFINITION
    const taskDefinition = new ecs.Ec2TaskDefinition(this, 'TaskDef', {
      networkMode: ecs.NetworkMode.AWS_VPC,
    });
    const container = taskDefinition.addContainer('HelloWorldContainer', {
      image: ecs.ContainerImage.fromRegistry('nginxdemos/hello'),
      memoryReservationMiB: 256,
      portMappings: [
        {
          containerPort: 80,
          protocol: ecs.Protocol.TCP,
        },
      ],
    });

    const service = new ecs.Ec2Service(this, 'Service', {
      cluster,
      taskDefinition,
    });
    service.connections.allowFromAnyIpv4(ec2.Port.tcp(80));
  }
}

Finally found the answer:
To connect directly to an EC2 instance that is being managed by ECS, the task definition's network mode needs to be "host". That lets you use the instance like a regular EC2 instance.
In retrospect the relevant docs were pretty clear, I just didn't quite grok them the first time through.
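For reference, here is the one change to the script above that makes the instance reachable (a minimal sketch; everything else stays the same):

    const taskDefinition = new ecs.Ec2TaskDefinition(this, 'TaskDef', {
      networkMode: ecs.NetworkMode.HOST, // was AWS_VPC; HOST binds container ports directly to the instance
    });

With host networking, the container's port 80 is served straight from the instance's port 80, so the existing allowFromAnyIpv4 rules are enough to reach it.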

Related

Why can't I connect to my AWS Redshift Serverless cluster from my laptop?

I've set up a Redshift Serverless cluster w/ a workgroup and a namespace.
I turned on the "Publicly Accessible" option
I've created an inbound rule for the 5439 port w/ Source set to 0.0.0.0/0
I've created an IAM credential for access to Redshift
I ran aws config and added the keys
But when I run
aws redshift-data list-databases --cluster-identifier default --database dev --db-user admin --endpoint http://default.530158470050.us-east-1.redshift-serverless.amazonaws.com:5439/dev
I get this error:
Connection was closed before we received a valid response from endpoint URL: "http://default.XXXXXX.us-east-1.redshift-serverless.amazonaws.com:5439/dev".
In Node, when trying to use the AWS.RedshiftDataClient to do the same thing, I get this:
{
  code: 'TimeoutError',
  path: null,
  host: 'default.XXXXXXX.us-east-1.redshift-serverless.amazonaws.com',
  port: 5439,
  localAddress: undefined,
  time: 2022-07-09T02:20:47.397Z,
  region: 'us-east-1',
  hostname: 'default.XXXXXX.us-east-1.redshift-serverless.amazonaws.com',
  retryable: true
}
What am I missing?
What Security Group and VPC have you configured for your Redshift Serverless cluster?
Make sure the Security Group allows traffic from "My IP" so that you can reach the VPC.
If that is not enough, check that the cluster is deployed in public subnets (an Internet Gateway attached to the VPC, route tables that eventually route traffic to it, and the "Publicly Accessible" option enabled).
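If the cluster's VPC happens to be managed with the CDK, the ingress rule could look like this TypeScript sketch (the construct names and IP are illustrative, and vpc is assumed to be the VPC the workgroup runs in):

    // Hypothetical security group for the Redshift Serverless workgroup
    const redshiftSg = new ec2.SecurityGroup(this, 'RedshiftServerlessSG', { vpc });
    // Allow inbound Redshift traffic (default port 5439) from your IP only
    redshiftSg.addIngressRule(
      ec2.Peer.ipv4('203.0.113.10/32'), // replace with your public IP
      ec2.Port.tcp(5439),
      'My IP'
    );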

Listing all the existing VPCs of an account via AWS CDK

Wondering if there's a way with the AWS CDK to list all the available VPCs for the current account.
For example the CLI provides aws ec2 describe-vpcs which is very handy to retrieve all the available VPCs.
I can also import a VPC if I know its identifier (python example) :
vpc = ec2.Vpc.from_lookup(self, "vpc", vpc_id=vpc_id)
However, at this point I haven't found a way to retrieve all (or filtered) VPCs (or their IDs) using the CDK. Any pointers?
Note: we're currently passing a CIDR block string to the cdk command line so we can configure the cidr parameter of the aws_ec2.Vpc constructor. We would like to avoid that and let the application find the next available CIDR block on its own (or the one that was used for this deployment, if previously created). For example, Vpc.private_subnets offers a way to list all private subnets (and their CIDR blocks) of an existing VPC, so I would have assumed the same could be obtained for the VPCs in an AWS account.
Short answer: don't.
Long answer: This is against AWS CDK best practices. As described in the docs on the topic, CDK apps should be deterministic. That is, the CDK code (along with context) in your VCS should always synth to the same template:

Determinism is key to successful AWS CDK deployments. An AWS CDK app should have essentially the same result whenever it is deployed (notwithstanding necessary differences based on the environment where it's deployed).

Using the AWS SDK in your CDK code breaks this determinism, so it's a good idea to rethink your approach.
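A deterministic alternative is to pass the VPC ID in through CDK context instead of calling the SDK at synth time. A TypeScript sketch (in line with the rest of this thread), assuming you invoke cdk with -c vpc_id=vpc-...:

    // Read the VPC ID from context: cdk synth -c vpc_id=vpc-1234567890abcdefg
    const vpcId = this.node.tryGetContext('vpc_id');
    const vpc = ec2.Vpc.fromLookup(this, 'vpc', { vpcId });

Vpc.fromLookup caches the lookup result in cdk.context.json, so committing that file keeps repeated synths deterministic.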
The great thing about CDK, in my opinion, is the pre-processing you can do on your templates. I have done similar things by combining boto3 with my CDK code:
from aws_cdk import (
    core as cdk,
    aws_ec2 as ec2,
)
import boto3

class CdkTestStack(cdk.Stack):
    def __init__(self, scope: cdk.Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # The code that defines your stack goes here
        client = boto3.client('ec2')
        all_vpcs = client.describe_vpcs()
        vpc = ec2.Vpc.from_lookup(self, "vpc", vpc_id=all_vpcs['Vpcs'][2]['VpcId'])
        sg = ec2.SecurityGroup(self, 'testSG', vpc=vpc)
        sg.add_ingress_rule(peer=ec2.Peer.any_ipv4(),
                            connection=ec2.Port.tcp(80))
Running cdk synth, we get:
Resources:
  testSG462E14A9:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: CdkTestStack/testSG
      SecurityGroupEgress:
        - CidrIp: 0.0.0.0/0
          Description: Allow all outbound traffic by default
          IpProtocol: "-1"
      SecurityGroupIngress:
        - CidrIp: 0.0.0.0/0
          Description: from 0.0.0.0/0:80
          FromPort: 80
          IpProtocol: tcp
          ToPort: 80
      VpcId: vpc-1234567890abcdefg
    Metadata:
      aws:cdk:path: CdkTestStack/testSG/Resource
  CDKMetadata:
    ...

Is there any way that I can assign Security Group and VPC to my web application hosted in Elastic Beanstalk using AWS CDK code

I am new to AWS CDK.
I have created an AWS CodePipeline using the AWS CDK (TypeScript). It creates the whole pipeline and deploys my application to Elastic Beanstalk, but it does not assign any VPC or security group, which makes my application public by default.
I want my application to be accessible only through my company network, using a VPC that already exists in our AWS account (say its name is "InternalPrivateVPC"), not publicly.
So I am trying to find a way to assign the existing VPC and security group to my application in CDK code, but I could not find any property or class on the Elastic Beanstalk constructs that would let me do that.
const appName = "SampleDotNetMVCWebApp";
const app = new elasticbeanstalk.CfnApplication(this, "EBApplication", {
  applicationName: appName
});
const elbEnv = new elasticbeanstalk.CfnEnvironment(this, "Environment", {
  environmentName: "SampleMVCEBEnvironment",
  applicationName: appName,
  solutionStackName: "64bit Windows Server 2012 R2 v2.5.0 running IIS 8.5"
});
Here is the whole code repo - https://github.com/dhirajkhodade/CDKDotNetWebAppEbPipeline
and
here is specific file which creates Elastic beanstalk app and environment - https://github.com/dhirajkhodade/CDKDotNetWebAppEbPipeline/blob/master/lib/cdk_dot_net_web_app_eb_pipeline-stack.ts
I believe you will have to use optionSettings to provide the VPC, subnet, and security-group IDs when creating the CfnEnvironment. Also refer to this page on how option_settings can be provided; CDK defers to CloudFormation attributes whenever necessary.
You will need the aws:ec2:vpc general option settings.
Something like this should work:
const elbEnv = new elasticbeanstalk.CfnEnvironment(this, "Environment", {
  environmentName: "SampleMVCEBEnvironment",
  applicationName: appName,
  solutionStackName: "64bit Windows Server 2012 R2 v2.5.0 running IIS 8.5",
  optionSettings: [
    {
      namespace: 'aws:ec2:vpc',
      optionName: 'VPCId',
      value: 'vpc-1234c'
    },
    {
      namespace: 'aws:ec2:vpc',
      optionName: 'Subnets',
      value: 'subnet-1f234567'
    },
    {
      namespace: 'aws:autoscaling:launchconfiguration',
      optionName: 'SecurityGroups',
      value: 'sg-7f12e34gd'
    },
  ]
});
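Since the goal is access from the company network only, you may also want to make the environment's load balancer internal. A sketch of one more entry for the same optionSettings array (the aws:ec2:vpc namespace also has an ELBScheme option):

    {
      namespace: 'aws:ec2:vpc',
      optionName: 'ELBScheme',
      value: 'internal' // makes the load balancer internal, so it is not reachable from the Internet
    },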

K8s service type ELB stuck at inprogress

Deployed a K8s service with type LoadBalancer. The K8s cluster runs on EC2 instances. The service is stuck in "pending" state.
Does service type LoadBalancer (ELB) require any particular AWS configuration?
Yes. Typically you need the option --cloud-provider=aws on:
All kubelets
kube-apiserver
kube-controller-manager
Also, you have to make sure that all your K8s instances (master/nodes) have an AWS instance role that allows them to create/remove ELBs and routes (full access to EC2 will do).
Then you need to make sure all your nodes are tagged:
Key: KubernetesCluster, Value: 'your cluster name'
Key: k8s.io/role/node, Value: 1 (For nodes only)
Key: kubernetes.io/cluster/kubernetes, Value: owned
Make sure your subnet is also tagged:
Key: KubernetesCluster, Value: 'your cluster name'
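If the nodes and subnets happen to be provisioned with the CDK (a TypeScript sketch, in line with the rest of this thread; the construct names are illustrative), the tags could be applied like this:

    import { Tags } from 'aws-cdk-lib';

    // 'nodeAsg' and 'nodeSubnet' are hypothetical constructs for your worker nodes and their subnet
    Tags.of(nodeAsg).add('KubernetesCluster', 'your-cluster-name');
    Tags.of(nodeAsg).add('k8s.io/role/node', '1');
    Tags.of(nodeSubnet).add('KubernetesCluster', 'your-cluster-name');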
Also, in your Kubernetes node definition you should have something like this:
ProviderID: aws:///<aws-region>/<instance-id>
Generally, all of the above is not needed if you are using the Kubernetes Cloud Controller Manager, which is in beta as of K8s 1.13.0.

Docker container deployed via Beanstalk cannot connect to the database on RDS

I'm new to both Docker and AWS. I just created my very first Docker image. The application is a backend microservice with REST controllers persisting data in a MySQL database. I've manually created the database in RDS, and after running the container locally, the REST APIs work fine in Postman.
Here is the Dockerfile:
FROM openjdk:8-jre-alpine
MAINTAINER alireza.online
COPY ./target/Practice-1-1.0-SNAPSHOT.jar /myApplication/
COPY ./target/libs/ /myApplication/libs/
EXPOSE 8080
CMD ["java", "-jar", "./myApplication/Practice-1-1.0-SNAPSHOT.jar"]
Then I deployed the docker image via AWS Beanstalk. Here is the Dockerrun.aws.json:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "aliam/backend",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8080"
    }
  ],
  "Logging": "/var/log/nginx"
}
And everything went well:
But now I'm getting "502 Bad Gateway" in Postman when calling "backend.us-east-2.elasticbeanstalk.com/health".
I checked the log on Beanstalk and realized that the application has problem connecting to the RDS database:
"Could not create connection to database server. Attempted reconnect 3 times. Giving up."
What I tried to do to solve the problem:
1- I tried assigning the same security group the EC2 instance uses to my RDS instance, but it didn't work.
2- I tried adding more inbound rules to the security group for the public and private IPs of the EC2 instance, but I was not sure which port and CIDR to define, so I couldn't make it work.
Any comment would be highly appreciated.
Here are the resources in your stack:
LoadBalancer -> EC2 instance(s) -> MySQL database
All of them need to have SecurityGroups assigned to them, allowing connections on the right ports to the upstream resources.
So, if you assign sg-1234 security group to your EC2 instances, and sg-5678 to your RDS database, there must be a rule existing in the sg-5678 allowing inbound connections from sg-1234 (no need for CIDRs, you can open a connection from SG to SG). The typical MySQL port is 3306.
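In CDK terms, the SG-to-SG rule could look like this TypeScript sketch (the security-group variables are hypothetical):

    // Allow the instances' security group to reach the database on MySQL's port
    dbSecurityGroup.addIngressRule(
      appSecurityGroup,     // source: the EC2 instances' security group (sg-1234 above)
      ec2.Port.tcp(3306),   // the typical MySQL port
      'MySQL from app instances'
    );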
Similarly, the LoadBalancer (which is automatically created for you by Elastic Beanstalk) must have access to your EC2 instances' port 8080. Furthermore, if you want to reach your instances at "backend.us-east-2.elasticbeanstalk.com/health", the load balancer has to listen on port 80 and forward to a target group of your instances on port 8080.
Hope this helps!