How to deploy a backend and frontend app in ECS?

I have dockerised our web app into two different Docker images:
1. Frontend
2. Backend
Both apps have their own .env files, in which I have to define the deployed server's IP address so they can connect to each other.
In the frontend .env: configure the backend IP.
But when deploying as ECS services, each container gets a different IP. How do I solve this so the services can scale out and still connect to each other?
So far:
Create separate ECS clusters for frontend and backend, with an ALB.
Put the ALB addresses in the .env files so the apps can connect and hit the API.
Are there any other solutions for this deployment?

You should use Service Discovery to achieve this. You can read the announcement blog here. In a nutshell, it works like this: if you have two ECS services, frontend and backend, you expose frontend with an ALB for public access, and by enabling service discovery all tasks that belong to frontend can connect to the tasks that belong to backend by calling backend.<domain> (where <domain> is the service discovery namespace you defined). When you do so, ECS Service Discovery resolves backend.<domain> to the private IP addresses of the tasks in backend, thus eliminating the need for a load balancer in front of it.
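For reference, a minimal CloudFormation sketch of that wiring, assuming a hypothetical corp.local namespace; the cluster name, task definition, VPC and subnet ids are placeholders, not values from the question:

# Hedged sketch: register the backend ECS service in a private DNS namespace
# so that frontend tasks can reach it as backend.corp.local.
Resources:
  Namespace:
    Type: AWS::ServiceDiscovery::PrivateDnsNamespace
    Properties:
      Name: corp.local                   # the <domain> part of backend.<domain>
      Vpc: vpc-0123456789abcdef0         # assumed VPC id

  BackendRegistry:
    Type: AWS::ServiceDiscovery::Service
    Properties:
      Name: backend                      # tasks register as backend.corp.local
      DnsConfig:
        NamespaceId: !Ref Namespace
        DnsRecords:
          - Type: A                      # resolves to the tasks' private IPs
            TTL: 10

  BackendService:
    Type: AWS::ECS::Service
    Properties:
      Cluster: my-cluster                # assumed cluster name
      TaskDefinition: backend-task       # assumed task definition family
      DesiredCount: 2
      LaunchType: FARGATE
      NetworkConfiguration:
        AwsvpcConfiguration:
          Subnets:
            - subnet-0123456789abcdef0   # assumed private subnet
      ServiceRegistries:
        - RegistryArn: !GetAtt BackendRegistry.Arn

The frontend's .env can then point at http://backend.corp.local:<port> instead of a hard-coded task IP.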
If you want a practical example of how this works, you can investigate this basic demo app:

Related

Fargate - How do I make an API call to another container within the same task definition

When developing locally, I run docker-compose, where I have two services, Service1 and Service2. Service2 depends on Service1. When I deploy them to ECS, I create them within one task definition and provide a JSON array of container definitions to spin them up.
When I run them locally within docker-compose, from Service2 I can call http://Service1:8080/v1/graphql (since they're in docker-compose together, I can reach it by the service name) ... however, when I deploy to ECS and make that same API call, I get a 404.
Based on this: Docker links with awsvpc network mode, I've also tried http://localhost:8080/v1/graphql ... I'd appreciate any help!
I'd try service discovery as mentioned here:
Amazon ECS now includes integrated service discovery. This makes it possible for an ECS service to automatically register itself with a predictable and friendly DNS name in Amazon Route 53. As your services scale up or down in response to load or container health, the Route 53 hosted zone is kept up to date, allowing other services to lookup where they need to make connections based on the state of each service.
See an example here.
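Concretely, once service discovery is set up, the compose-style service name just changes to the discovery DNS name. A hypothetical task-definition fragment (the corp.local namespace, image name, and port are assumptions, in the same CloudFormation style as the sketch above):

# Hedged sketch: Service2 calls Service1 by its service-discovery DNS name
# instead of the docker-compose service name. All names are illustrative.
Service2TaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    Family: service2
    NetworkMode: awsvpc
    ContainerDefinitions:
      - Name: service2
        Image: myorg/service2:latest     # assumed image
        Memory: 512
        Environment:
          - Name: SERVICE1_URL           # read by the app in place of http://Service1:8080
            Value: http://service1.corp.local:8080/v1/graphql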

Run multiple servers with interconnection on Amazon AWS

We are developing applications and devices that communicate with our servers. We have one "main" Java Spring server which handles almost all the HTTP requests, including user authentication, storing relevant user data, and giving that data to the applications. Furthermore, we have a few smaller HTTP servers (written in Go) which are both used by the "main" server to perform certain tasks and also have some public APIs that apps and devices use directly.
In our current non-production setup we run all the servers locally on one machine, with an apache2 in front which directs the requests. The servers can be accessed through apache2 via their respective subdomains, but they also communicate with each other. When doing so, we currently simply send the request to localhost:{PORT}, since they all run on the same machine. They furthermore all use the same MySQL server running on that same machine.
We are now looking to make it more production-ready and want to deploy it to AWS. The servers are currently not containerized, so a solution that requires containerization (ECS? K8s?) would most likely require more work. What would be the most straightforward way to do the following:
Deploy a number of servers on AWS where they are exposed publicly with their respective domains but can also communicate internally with one another (or would they just communicate with one another using their public domains?)
Deploy a managed SQL database (Amazon RDS?) which is accessible to all the servers.
Set up the routing of the requests. We currently run our own configured apache2, but I assume we can add a managed API Gateway in AWS and configure it for our servers.
Q. Deploy a number of servers on AWS where they are exposed publicly with their respective domains but can also communicate internally with one another (or would they just communicate with one another using their public domains?)
On AWS you create a VPC (a default VPC is created when you first set up your account).
You can deploy a number of EC2 instances (virtual servers) with just private IP addresses and no public access, and put them behind an ELB (Elastic Load Balancer). The ELB takes all the traffic and distributes the load onto the servers based on the endpoint.
The EC2 instances won't have public IPs, but the VPC (Virtual Private Cloud) allows your services to communicate with each other via private IPs (something like 172.31.xx.xx). You can also give domain/subdomain names to these private IP addresses using AWS's Route 53 service.
For example, you launch 2 servers:
Your Java application on 172.31.1.1 (you name it xyz.myjavaapp.something.com in Route 53)
Your Angular application on 172.31.1.2
The Angular application can reach your Java application at 172.31.1.1:8080 or
xyz.myjavaapp.something.com:8080
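As a sketch of the Route 53 part (it assumes a private hosted zone for something.com already exists; the zone id is a placeholder):

# Hedged sketch: map the friendly name to the Java server's private IP in a
# private hosted zone, so other servers in the VPC can use the name.
Resources:
  JavaAppRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: Z0123456789EXAMPLE   # assumed private hosted zone id
      Name: xyz.myjavaapp.something.com
      Type: A
      TTL: "60"
      ResourceRecords:
        - 172.31.1.1                     # the Java app's private IP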
Q. Deploy a managed SQL database (Amazon RDS?) which is accessible to all the servers.
Yes, you can deploy a SQL database on RDS and it will be available to the EC2 instances. Just make sure you create proper security groups to allow only your servers to access it, and do not leave it open to the public internet.
For example, a VPC-only security group entry is 172.31.0.0/16. This will allow only the servers in your VPC to connect to the RDS DB, given that your VPC subnet has the range 172.31.x.x.
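A hypothetical security group along those lines (the VPC id is a placeholder):

# Hedged sketch: admit MySQL (3306) only from addresses inside the VPC range,
# and nothing from the public internet.
Resources:
  DbSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: MySQL access from VPC-internal addresses only
      VpcId: vpc-0123456789abcdef0       # assumed VPC id
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 3306
          ToPort: 3306
          CidrIp: 172.31.0.0/16          # the VPC CIDR from the example above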
Q. Set up the routing of the requests. We currently run our own configured apache2, but I assume we can add a managed API Gateway in AWS and configure it for our servers.
You can set up public/private APIs and manage different endpoints using API Gateway.
Another way is to put your application servers behind an Application Load Balancer (ALB). The ALB can take care of load balancing as well as endpoint management.
For example:
If you decide to deploy 2 servers for /getData and 1 server for /doSomethingElse, that can be easily managed by the ALB's path-based routing.
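A hypothetical sketch of such path-based rules (the listener and target groups are placeholders assumed to be defined elsewhere in the same template):

# Hedged sketch: route /getData to one target group (the 2-server pool) and
# /doSomethingElse to another.
GetDataRule:
  Type: AWS::ElasticLoadBalancingV2::ListenerRule
  Properties:
    ListenerArn: !Ref AlbListener        # assumed listener resource
    Priority: 1
    Conditions:
      - Field: path-pattern
        PathPatternConfig:
          Values: ["/getData"]
    Actions:
      - Type: forward
        TargetGroupArn: !Ref GetDataTargetGroup
DoSomethingElseRule:
  Type: AWS::ElasticLoadBalancingV2::ListenerRule
  Properties:
    ListenerArn: !Ref AlbListener
    Priority: 2
    Conditions:
      - Field: path-pattern
        PathPatternConfig:
          Values: ["/doSomethingElse"]
    Actions:
      - Type: forward
        TargetGroupArn: !Ref DoSomethingElseTargetGroup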
I would suggest you use at least two servers for critical services and load balance them behind an ELB for a production environment.
On another note, containerizing and deploying to Kubernetes is not that difficult or time-consuming. It does have a learning curve, but the benefits outweigh it.
Feel free to ask questions.

Cannot connect frontend app (Angular) to backend (Spring Boot) in Kubernetes

I am trying to containerize my Angular + Java app in a Kubernetes cluster. I have a frontend deployment and a backend deployment in my k8s cluster. My database is in AWS (RDS). But I am confused about what API URL I should give in my frontend code so that it can connect to my backend app in the k8s cluster.
For example:
On my local system I use something like localhost:8080/api/customers in my frontend code, but what should I change it to when deploying to the Kubernetes cluster?
I have a Kubernetes cluster set up with 1 master and 2 worker nodes. I created a deployment of my backend app and exposed it through a ClusterIP service, and then gave this cluster IP and port to my frontend application.
After that I pushed the image to Docker Hub and then created a k8s deployment for it, but it's still not working.
My main question is: what URL and port should I put as the target URL in my frontend application so that it can hit my Java APIs?
The frontend Angular application runs inside the user's browser. That is outside of the Kubernetes cluster, so you cannot use the Kubernetes service name as the API endpoint.
You need to make the Spring Boot API accessible from outside of Kubernetes, usually via an ingress or a load balancer. You then use that external IP or host name as the API URL in the Angular application.
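A minimal sketch of the ingress route, assuming a ClusterIP Service named backend on port 8080 and a host name you control (both are assumptions):

# Hedged sketch: expose the Spring Boot API outside the cluster so the Angular
# app running in the browser can call it.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend-api
spec:
  rules:
    - host: api.example.com              # assumed host name
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: backend            # assumed ClusterIP Service
                port:
                  number: 8080

The Angular code would then call http://api.example.com/api/customers instead of localhost:8080/api/customers.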
If your two applications run in the same Kubernetes cluster, you can call your backend service as svcname:port, for example:
http://login:8080/login
This assumes the pods for your frontend are in the same Kubernetes namespace. If they are in a different namespace, you would call something like this:
http://login.<namespace>.svc.cluster.local:5555/login
Exposing my backend service through a load balancer, and then using that load balancer endpoint in my frontend application, worked for me.

How to make frontend application talk to backend applications without creating ingress for the backend

I have deployed a Kubernetes cluster using kops. The current cluster uses an nginx ingress controller, which creates a classic load balancer in AWS. I have some backend applications that talk to the frontend application and some backend services that just talk to each other. The problem is that the only way currently to make the frontend app talk to the backend apps is by creating an ingress for the backend apps, since the frontend sends requests via the domain name and doesn't understand the internal service names. For the backends it is fine, since they can talk internally just by using the service name and their respective port. How can I achieve this without having to create an ingress for the backends? Is it possible to do with an Application Load Balancer, or do I need an API gateway for that? How do I achieve this architecture? Adding an architecture diagram to show what I want to achieve. Any help is appreciated.
From your architecture diagram it looks like all your applications are within the cluster, so there is no need for an ingress. You can just use Kubernetes services.
Your frontend app should be able to call the endpoints of the backend services; otherwise, something is wrong in the configuration of the frontend service.
If you have no way to change the URL which the frontend app calls for the backend services, you can use, for example, a Kubernetes service with a CNAME (an ExternalName service) that redirects to your internal services, as sketched below.
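A minimal sketch of that ExternalName approach, assuming the frontend already calls a host named api and the real service is backend-service in the default namespace (all names are assumptions):

# Hedged sketch: alias the name the frontend already uses to the internal
# service via an ExternalName (CNAME-style) Service.
apiVersion: v1
kind: Service
metadata:
  name: api                              # the frontend keeps calling http://api:PORT
spec:
  type: ExternalName
  externalName: backend-service.default.svc.cluster.local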
You don't need an ingress to connect to the backend from the frontend.
Assuming both backend and frontend pods are running in the same Kubernetes cluster, the frontend service can connect to the backend service using its service DNS name:
backend-service.<namespace>.svc.cluster.local
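For completeness, a sketch of the ClusterIP Service behind that DNS name (the labels, namespace, and port are assumptions):

# Hedged sketch: a plain ClusterIP Service; pods labelled app=backend become
# reachable at backend-service.default.svc.cluster.local:8080.
apiVersion: v1
kind: Service
metadata:
  name: backend-service
  namespace: default
spec:
  selector:
    app: backend                         # assumed pod label
  ports:
    - port: 8080                         # assumed service port
      targetPort: 8080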

Kubernetes frontend deployment timing out when requesting API deployment

Let me start this by saying I am fairly new to k8s. I'm using kops on AWS.
I currently have 3 deployments on a cluster:
Frontend: an nginx image serving an Angular web app. One pod. External service.
socket.io server: internal service. (This is a chat application, and we decided to separate this server from our API. Was this a good idea?)
API: requested by both the socket.io server and the web application. Internal service (should it be external?).
The socket.io deployment and the API seem to be able to communicate through the cluster IPs and the corresponding services I have set up for the deployments; however, the web app times out when querying the API.
From the web app, I am querying the API using the API's cluster IP address. Should I be requesting a different address?
Additionally, what is the best way to configure these addresses in my files without having to change them each time I create a new deployment? (The cluster IP addresses change every time you tear down and recreate a deployment.)
If I understood correctly, your frontend web application depends on the API server and queries it from the browser. In that case, your API service should be reachable from outside of the cluster, which means it should be exposed as a NodePort or LoadBalancer service type.
P.S. You can refer to a service by its ClusterIP only from inside the cluster.
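A minimal sketch, assuming the API pods carry the label app=api and listen on port 8080 (both assumptions):

# Hedged sketch: expose the API outside the cluster. On kops/AWS a
# type: LoadBalancer Service provisions an ELB; the web app then uses the
# ELB's DNS name instead of the ClusterIP.
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  type: LoadBalancer                     # NodePort would also work
  selector:
    app: api                             # assumed pod label
  ports:
    - port: 80
      targetPort: 8080                   # assumed container port

Using the Service's stable DNS name (or the ELB host name) also avoids hard-coding ClusterIPs that change when deployments are torn down and recreated.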