I have two Dropwizard services running in AWS ECS, each in its own Docker container and part of the same ECS cluster, and I need to use a single ALB for both of them. In the load balancer settings I can't see how to map it to two services. Please guide me, as I need this to keep overhead low and for cost-saving purposes.
You will need to add a listener (or listener rule) for each container port mapping, even if the containers run on the same host.
The additional listeners can only be added after the creation wizard finishes.
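For an Application Load Balancer specifically, a single listener with host- or path-based rules is usually enough: give each Dropwizard service its own target group and add one rule per service. A minimal sketch of one such rule, usable with `aws elbv2 create-rule --cli-input-json` (the ARNs, priority, and path pattern are placeholders you would replace with your own):

```json
{
  "ListenerArn": "arn:aws:elasticloadbalancing:eu-west-1:111111111111:listener/app/my-alb/EXAMPLE/EXAMPLE",
  "Priority": 10,
  "Conditions": [
    {
      "Field": "path-pattern",
      "Values": ["/service-b/*"]
    }
  ],
  "Actions": [
    {
      "Type": "forward",
      "TargetGroupArn": "arn:aws:elasticloadbalancing:eu-west-1:111111111111:targetgroup/service-b/EXAMPLE"
    }
  ]
}
```

Requests matching `/service-b/*` are forwarded to the second service's target group; everything else falls through to the listener's default action (the first service).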
I'm new to AWS and I am trying to gauge what migrating our existing applications into AWS would look like. I'm trying to host multiple apps as Services under a single ECS cluster, and use one Application Load Balancer with hostname rules to route requests to the correct container.
I was originally thinking I could give each service its own Target Group, but I ran into the RESOURCE:ENI error, which from what I can tell means that I can't just attach as many Target Groups as I want to the same cluster.
I don't want to create a separate cluster for each app, or use separate load balancers for them because these apps are very small and receive little to no traffic so it just wouldn't make sense. Even the minimum of 0.25 vCPU/0.5 GB that Fargate has is overkill for these apps.
What's the best way to host many apps under one ECS cluster and one Load Balancer? Is it best to create my own reverse-proxy server to do the routing to different apps?
You are likely using the awsvpc network mode in your task definitions. You could change it to the (default) bridge mode instead: each awsvpc task consumes an ENI, and an EC2 instance only supports a limited number of ENIs, which is where the RESOURCE:ENI error comes from. Your services don't sound like ones that need the added network performance of the native EC2 networking stack.
With bridge mode, the target groups' target type should be instance, as per my understanding.
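A sketch of what such a task definition could look like, assuming a hypothetical app listening on container port 8080 (the family and image names are placeholders). Setting hostPort to 0 asks ECS to pick an ephemeral host port, so several tasks can share one instance and register with the ALB target group via dynamic port mapping:

```json
{
  "family": "small-app",
  "networkMode": "bridge",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "my-registry/small-app:latest",
      "memory": 256,
      "essential": true,
      "portMappings": [
        {
          "containerPort": 8080,
          "hostPort": 0,
          "protocol": "tcp"
        }
      ]
    }
  ]
}
```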
Suppose I have two apps launched via the AWS ECS cluster (using Docker containers).
I want to expose one app to the world via a public IP (and I do it via the AWS load balancer) but the other one I want to be able to access only internally, so that it would not have any public IPs and would only be accessible internally.
Is it possible to do that at all? I suppose it should be easier via Docker containers because I could possibly make them communicate with each other by exposing localhost via
--network="host" in docker run
But that would only work if I run the two apps on the same EC2 instance.
What if I run them on separate instances but they are using the same load balancer or — separate instances but in the same AWS zone?
What setting would I use in ECS to expose this app only via the localhost?
You can use service discovery or an internal load balancer for internal communication.
I want to expose one app to the world via a public IP (and I do it via the AWS load balancer) but the other one I want to be able to access only internally
So you are actually interested in running two services, for example:
Service A
Service B
Attach an internet-facing load balancer to service A to make it available for public use.
Use an internal load balancer or service discovery to communicate with service B, which is only available within the network.
What setting would I use in ECS to expose this app only via the localhost?
There is no cross-instance localhost (localhost is visible on that container's host only), so to deal with this question, use an internal load balancer.
What if I run them on separate instances but they are using the same load balancer or — separate instances but in the same AWS zone?
With a load balancer, you can run hundreds of replicas across hundreds of ECS instances; AWS will route and manage the traffic accordingly.
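An internal load balancer is created the same way as a public one, only with its scheme set to internal, so it gets private IPs from your subnets and is reachable only from within the VPC. A sketch for `aws elbv2 create-load-balancer --cli-input-json` (the name, subnet IDs, and security-group ID are placeholders):

```json
{
  "Name": "service-b-internal",
  "Scheme": "internal",
  "Type": "application",
  "Subnets": ["subnet-aaaa1111", "subnet-bbbb2222"],
  "SecurityGroups": ["sg-0123456789abcdef0"]
}
```

Service A would then call service B through this load balancer's private DNS name, regardless of which instances the tasks land on.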
My applications run on Elastic Beanstalk and communicate purely with internal services like Kinesis and DynamoDB; there is no inbound web traffic. Do I need an Elastic Load Balancer in order to scale my instances up and down? I want to add and remove instances purely based on some CloudWatch metrics. Do I need the ELB to do managed updates etc.?
If there is no traffic to the service then there is no need to have a load balancer.
In fact the load balancer is primarily to distribute inbound traffic such as web requests.
Autoscaling can still be accomplished without a load balancer, with scaling based on whichever CloudWatch metric you want to use. In fact, this is generally how consumer-based applications tend to work.
To create this without a load balancer, you would want to configure your environment as a worker environment.
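A sketch of creating such a worker environment with a metric-based scaling trigger, for `aws elasticbeanstalk create-environment --cli-input-json` (the application, environment, and solution-stack names are placeholders; aws:autoscaling:trigger is Elastic Beanstalk's built-in namespace for CloudWatch-metric scaling):

```json
{
  "ApplicationName": "my-app",
  "EnvironmentName": "my-app-worker",
  "SolutionStackName": "<your solution stack>",
  "Tier": {
    "Name": "Worker",
    "Type": "SQS/HTTP"
  },
  "OptionSettings": [
    { "Namespace": "aws:autoscaling:trigger", "OptionName": "MeasureName", "Value": "CPUUtilization" },
    { "Namespace": "aws:autoscaling:trigger", "OptionName": "Statistic", "Value": "Average" },
    { "Namespace": "aws:autoscaling:trigger", "OptionName": "Unit", "Value": "Percent" },
    { "Namespace": "aws:autoscaling:trigger", "OptionName": "UpperThreshold", "Value": "70" },
    { "Namespace": "aws:autoscaling:trigger", "OptionName": "LowerThreshold", "Value": "30" }
  ]
}
```

Instances are added when average CPU stays above the upper threshold and removed when it drops below the lower one, with no load balancer involved.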
@Chris already answered, but I would like to complement his answer with the following:
There is no web traffic needed?
Even if you communicate with Kinesis and DynamoDB only, your instances still need to be able to reach the internet to talk to the AWS service endpoints. So outbound web traffic is required from your instances; direct inbound traffic to your instances is not needed.
To fully separate your EB env from the internet you should have a look at the following:
Using Elastic Beanstalk with Amazon VPC
The document describes what can and can't be done when using private subnets.
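For illustration, instances can be placed into private subnets through the aws:ec2:vpc option namespace; a sketch of the relevant option settings (the VPC and subnet IDs are placeholders), which could go in a saved configuration or be passed to create-environment:

```json
[
  { "Namespace": "aws:ec2:vpc", "OptionName": "VPCId", "Value": "vpc-0123456789abcdef0" },
  { "Namespace": "aws:ec2:vpc", "OptionName": "Subnets", "Value": "subnet-aaaa1111,subnet-bbbb2222" },
  { "Namespace": "aws:ec2:vpc", "OptionName": "AssociatePublicIpAddress", "Value": "false" }
]
```

With no public IPs, outbound calls to Kinesis and DynamoDB then have to go through a NAT gateway or VPC endpoints in those private subnets.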
I'm trying to set up a cluster with several different tasks that need to be able to communicate with each other. I have turned on Service Discovery for each task, and I see all of the Route 53 DNS entries in my private hosted zone get updated as I spin up new tasks, but for whatever reason, when I try to use the domain name of a service (wordpress.local), my other containers cannot resolve it. They are all in the same availability zone and the same subnet. I'm not totally certain what else I need to do to get these tasks to communicate with each other, aside from setting up a target group in my load balancer, which seems unnecessary since I have Service Discovery turned on...
Is it possible to run one Google Container Engine cluster in the EU and one in the US, and load balance between the apps running on those clusters?
Google Cloud HTTP(S) Load Balancing, TCP Proxy and SSL Proxy support cross-region load balancing. You can point it at multiple different GKE clusters by creating a backend service that forwards traffic to the instance groups for your node pools, and sends traffic on a NodePort for your service.
However it would be preferable to create the LB automatically, like Kubernetes does for an Ingress. One way to do this is with Cluster Federation, which has support for Federated Ingress.
Try kubemci for some help in getting this setup. GKE does not currently support or recommend Kubernetes cluster federation.
From their docs:
kubemci allows users to manage multicluster ingresses without having to enroll all the clusters in a federation first. This relieves them of the overhead of managing a federation control plane in exchange for having to run the kubemci command explicitly each time they want to add or remove a cluster.
Also since kubemci creates GCE resources (backend services, health checks, forwarding rules, etc) itself, it does not have the same problem of ingress controllers in each cluster competing with each other to program similar resources.
See https://github.com/GoogleCloudPlatform/k8s-multicluster-ingress
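Per the kubemci README, the tool consumes an ordinary Ingress spec marked with the gce-multi-cluster ingress class and a pre-reserved global static IP; a sketch (the Ingress name, static-IP name, and service name are placeholders, and the backing service is assumed to be of type NodePort in every cluster):

```json
{
  "apiVersion": "extensions/v1beta1",
  "kind": "Ingress",
  "metadata": {
    "name": "my-mci",
    "annotations": {
      "kubernetes.io/ingress.class": "gce-multi-cluster",
      "kubernetes.io/ingress.global-static-ip-name": "my-global-ip"
    }
  },
  "spec": {
    "backend": {
      "serviceName": "my-service",
      "servicePort": 80
    }
  }
}
```

You would then run something like `kubemci create my-mci --ingress=ingress.json --gcp-project=<project> --kubeconfig=<clusters-kubeconfig>` to program the GCE resources (backend services, health checks, forwarding rules) across the clusters listed in that kubeconfig.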