I used the CloudFormation template provided by Docker for AWS setup & prerequisites to set up a Docker swarm.
I created a REST service using Tibco BusinessWorks Container Edition and deployed it into the swarm by creating a docker service.
docker service create --name aka-swarm-demo --publish 8087:8085 akamatibco/docker_swarm_demo:part1
The service starts successfully but the CloudWatch logs show the below exception:
I have tried passing the JVM environment variable in the Dockerfile as:
ENV JAVA_OPTS="-Dbw.rest.docApi.port=7778"
but it doesn't help.
The interesting fact is that at the end the log says:
com.tibco.thor.frwk.Application - TIBCO-THOR-FRWK-300006: Started BW Application [SFDemo:1.0]
So I tried to access the application using CURL -
curl -X GET --header 'Accept: application/json' 'http://<AWS-load-balancer-URL>:<port-published-while-creating-the-service>/<resource-URI>'
But I am getting the below message:
The REST service works fine when I do docker run.
I have checked the Security Groups of the manager and the load balancer. The load balancer has inbound open to all traffic, and for the manager I opened HTTP connections.
I am not able to figure out what I have missed. Can anyone please help?
As mentioned in Deploy services to swarm, if you read along, you will find the following:
PUBLISH A SERVICE’S PORTS DIRECTLY ON THE SWARM NODE
Using the routing mesh may not be the right choice for your application if you need to make routing decisions based on application state or you need total control of the process for routing requests to your service’s tasks. To publish a service’s port directly on the node where it is running, use the mode=host option to the --publish flag.
Note: If you publish a service's ports directly on the swarm node using mode=host and also set published=<PORT> this creates an implicit limitation that you can only run one task for that service on a given swarm node. In addition, if you use mode=host and you do not use the --mode=global flag on docker service create, it will be difficult to know which nodes are running the service in order to route work to them.
Publishing ports for services works differently than for regular containers. The problem was that the image did not expose the port after running service create --publish, and hence the swarm routing layer could not reach the REST service. To resolve this, use mode=host.
So I used the below command to create a service:
docker service create --name tuesday --publish mode=host,target=8085,published=8087 akamatibco/docker_swarm_demo:part1
Which eventually removed the exception.
Also make sure to configure the firewall settings of your load balancer to allow communication over the desired protocols, so you can access the applications deployed inside the containers.
In my case it was the HTTP protocol; enabling port 8087 on the load balancer served the purpose.
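For reference, the same inbound rule can be added from the AWS CLI; a minimal sketch, where the security group ID is a placeholder for the one attached to your load balancer:

# open the published port on the load balancer's security group
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 8087 \
    --cidr 0.0.0.0/0

Note that 0.0.0.0/0 opens the port to the world; narrow the CIDR if the service should not be public.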
I have a dockerized Node.js Express application that I am migrating to AWS from Google Cloud. I had successfully done this on the same project before deciding Cloud Run was more cost-effective because of its free tier. Now I want to switch back to Fargate, but I am unable to do it again, due to what I'm guessing is a missed crucial step. For minimal setup, I used the following guide: https://docs.docker.com/cloud/ecs-integration/ Essentially, it uses docker compose up with an aws context and a project name to deploy to ECS and Fargate.
The Load Balancer gives me a public DNS name in the format xxxxx.elb.us-west-2.amazonaws.com, and I have defined port 5002 in my Docker container. I know the issue is not related to exposing port numbers or any code-related issue, since I had this successfully running in Google Cloud Run. When I try to hit any of my Express endpoints by sending a POST to xxxxx.elb.us-west-2.amazonaws.com:5002/my_endpoint, I end up with Error: Request Timed Out.
Note: I have already verified that my inbound security rules have been set to all traffic.
I am very new to AWS, so would love guidance if I am missing a critical step.
Thanks!
EDIT (SOLUTION): It turns out everything was deploying correctly, but after checking the CloudWatch logs I found that Fargate can't read environment variables defined inside the docker-compose file. Instead, they need to be defined in a .env file and passed to docker compose through the --env-file flag. My code was trying to listen on a port that came from an environment variable but was undefined, so it was producing the below error in CloudWatch.
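A minimal sketch of that workaround, where the file contents and the PORT variable are illustrative:

# .env -- variables Fargate could not pick up from docker-compose.yml alone
PORT=5002

# docker-compose.yml can then interpolate them, e.g.  ports: - "${PORT}:${PORT}"

# pass the file explicitly when deploying with the ECS context
docker compose --env-file .env up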
I have dockerised our webapp into two different Docker images:
1. Frontend
2. Backend
Both apps have their own .env files, and I have to define the deployed server's IP address so they can connect to each other.
In the frontend .env: configure the backend IP.
But when deploying as an ECS service, each container gets a different IP. How do I solve this so I can scale out and still have the services connect to each other?
So far:
Created separate ECS clusters for frontend and backend, with an ALB.
Gave the ALB address in the .env files to connect them or hit the API.
Any other solutions for this deployment?
You should be using Service Discovery to achieve this. You can read the announcement blog here. In a nutshell, the way it works is: if you have two ECS services, frontend and backend, you want to expose frontend with an ALB for public access, but by enabling service discovery all tasks that belong to frontend will be able to connect to the tasks that belong to backend by calling backend.<domain> (where <domain> is the service discovery namespace you defined). When you do so, ECS Service Discovery resolves backend.<domain> to the private IP addresses of the tasks in backend, thus eliminating the need for a load balancer in front of it.
If you want a practical example of how this works, you can investigate a basic demo app of this setup.
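As a rough sketch of the moving parts from the CLI (the namespace name local, the cluster name, and all IDs/ARNs below are hypothetical):

# create a private DNS namespace in the cluster's VPC
aws servicediscovery create-private-dns-namespace \
    --name local \
    --vpc vpc-0123456789abcdef0

# create a discovery service inside that namespace for the backend
aws servicediscovery create-service \
    --name backend \
    --dns-config "NamespaceId=ns-xxxxxxxx,DnsRecords=[{Type=A,TTL=10}]" \
    --health-check-custom-config FailureThreshold=1

# create the ECS service with a service registry, so each backend task
# registers its private IP in Route 53 as backend.local
aws ecs create-service \
    --cluster my-cluster \
    --service-name backend \
    --task-definition backend:1 \
    --desired-count 2 \
    --service-registries "registryArn=arn:aws:servicediscovery:us-east-1:123456789012:service/srv-xxxxxxxx"

Frontend tasks can then call http://backend.local:<port> directly, with no load balancer in between.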
When developing locally, I run docker-compose, where I have two services, Service1 and Service2. Service2 depends on Service1. When I deploy them to ECS, I create them within one task definition and provide a JSON array of container definitions to spin them up.
When I run them locally within docker-compose, from Service2 I can call http://Service1:8080/v1/graphql (since they're in docker-compose together, I can call it by the service name)... however, when I deploy to ECS and make that same API call, I get a 404.
Based on this: Docker links with awsvpc network mode, I've also tried http://localhost:8080/v1/graphql ... I'd appreciate any help!
I'd try service discovery as mentioned here:
Amazon ECS now includes integrated service discovery. This makes it possible for an ECS service to automatically register itself with a predictable and friendly DNS name in Amazon Route 53. As your services scale up or down in response to load or container health, the Route 53 hosted zone is kept up to date, allowing other services to lookup where they need to make connections based on the state of each service.
See an example here.
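For this setup there are two distinct paths, depending on where the containers live; a sketch, where the port, the service name and the namespace local are assumptions:

# containers in the SAME task with awsvpc share a network namespace,
# so sibling containers are reachable on localhost:
curl http://localhost:8080/v1/graphql

# containers in DIFFERENT tasks/services reach each other through the
# service discovery DNS name kept up to date in Route 53:
curl http://service1.local:8080/v1/graphql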
I have currently deployed a web server via the ecs-cli compose service up command, and I have further registered a domain in the Route 53 service and a certificate through AWS Certificate Manager. Making use of an ALB (Application Load Balancer), I am able to get dynamic port mapping and HTTPS for my web application, but here is the problem.
Using docker compose as the blueprint for my web application, which consists of 3 containers (frontend, loopback and database (mongo)), my frontend container's dynamic port mapping and HTTPS are up and running fine.
However, the problem comes with the loopback container: the frontend sometimes needs to fetch something via the loopback API server (which uses port 3002), but the loopback container is not configured with HTTPS, which causes the error below when calling the API.
Through the ecs-cli compose service up command, I can configure a target group so the ALB forwards requests to the frontend container (using the --target-group-arn, --container-name and --container-port attributes to tie the frontend container to that specific target group), but this command seems unable to map a 2nd target group to my loopback container. Reading https://docs.aws.amazon.com/AmazonECS/latest/developerguide/register-multiple-targetgroups.html suggests that multiple target groups per service are possible, but I cannot figure out how to use the create-service command to link up my Docker containers without using the ecs-cli compose service up command.
Is there a way to:
Use the ecs-cli compose service up command to register multiple target groups for my containers?
Apply HTTPS also to my loopback URL (whose domain name is myDomain.com:3002)?
======================================================
Follow-up tasks
Created 2 target groups
Configured rules and listeners
Knowing ecs-cli service up cannot register multiple target groups, I tried to do it via the console; still only 1 container can be registered.
Thanks, and I appreciate all help.
As far as your question is concerned, it is possible to do that using the AWS console, but the ecs-cli does not support multiple target groups at the moment.
You can check ecs-cli compose service up with a load balancer, and also consider amazon-ecs-cli-register-service.
The second error occurs when the frontend application tries to load mixed HTTP and HTTPS resources. If you look into the error, there may be static files or API calls that are made over HTTP; convert all of these calls to HTTPS and it should work fine. From the error, it seems a static file is being loaded from an HTTP site.
Once you have applied HTTPS, it should point to https://example.com or https://api.example.com; the port is not required with an HTTPS call if it is bound to the standard HTTPS port.
Update:
The ALB routes traffic based on the target group, and each target group contains the desired container.
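Expressed as CLI calls, that setup looks roughly like this (all names, ports, IDs and ARNs below are placeholders):

# one target group per container the ALB should route to
aws elbv2 create-target-group --name frontend-tg \
    --protocol HTTP --port 80 \
    --vpc-id vpc-0123456789abcdef0 --target-type ip

aws elbv2 create-target-group --name loopback-tg \
    --protocol HTTP --port 3002 \
    --vpc-id vpc-0123456789abcdef0 --target-type ip

# a rule on the HTTPS listener forwards API paths to the loopback group,
# so TLS terminates at the ALB and no port is needed in the URL
aws elbv2 create-rule \
    --listener-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/xxxx/yyyy \
    --priority 10 \
    --conditions Field=path-pattern,Values='/api/*' \
    --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/loopback-tg/zzzz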
ecs-cli compose service up accepts a --target-groups parameter, allowing you to add multiple target groups at once.
ecs-cli compose --file "../../src/docker-compose.yml" `
--ecs-params "../../src/ecs-params.yml" `
--project-name xxxxx service up `
--target-groups "targetGroupArn=arn:aws:elasticloadbalancing:eu-west-3:xxxxx:targetgroup/xxxx_tg1,containerPort=80,containerName=webapi" `
--target-groups "targetGroupArn=arn:aws:elasticloadbalancing:eu-west-3:xxxxx:targetgroup/xxxx_tg2,containerPort=81,containerName=webapi2" `
--cluster-config myconfig `
--ecs-profile myprofile
ecs-cli compose service up documentation
I have a REST API Java application and want to move it to the cloud,
but I don't understand which tutorial to use.
I already have a Docker image in Container Registry, built by Jib, and want to connect it with some cloud database (Cloud SQL/Spanner).
How do I change these connection props for the cloud?
db.driver=com.mysql.cj.jdbc.Driver
db.url=jdbc:mysql://localhost:3306/db
db.username=usrname
db.password=pswd
db.entity.package = com.example.model
We do this through the Cloud SQL Proxy Docker image: https://cloud.google.com/sql/docs/mysql/connect-docker
1. Enable the Cloud SQL Admin API.
2. Install the mysql client on the Compute Engine instance or client machine, if it is not already installed.
3. If needed, install the Docker client.
4. Install the Proxy Docker image from the Google Container Registry.
5. If you are running the Proxy Docker image on a local machine (not a Compute Engine instance), or your Compute Engine instance does not have the proper scopes, create a Google Cloud Platform service account.
6. Go to the Cloud SQL Instances page in the Google Cloud Console.
7. Select the instance to open its Instance details page and copy the Instance connection name.
8. Start the proxy:
docker run -d \
-v <PATH_TO_KEY_FILE>:/config \
-p 127.0.0.1:3306:3306 \
gcr.io/cloudsql-docker/gce-proxy:1.16 /cloud_sql_proxy \
-instances=<INSTANCE_CONNECTION_NAME>=tcp:0.0.0.0:3306 -credential_file=/config
Start the client:
mysql -u <USERNAME> -p --host 127.0.0.1
Then connect using
db.driver=com.mysql.cj.jdbc.Driver
db.url=jdbc:mysql://127.0.0.1:3306/db
db.username=usrname
db.password=pswd
db.entity.package = com.example.model
If you want to reach a Cloud SQL database from a GKE cluster, you have 2 solutions:
You can configure a private IP on Cloud SQL and then reach it directly at this IP. For this, your GKE cluster must be configured as VPC-native.
You can attach a sidecar to your main container which opens a Cloud SQL Proxy connection to your database. This solution is quite similar to @ParthMehta's answer. Here is the description (and the GitHub example) of this sidecar configuration.
For Spanner, it's different, because you can't use a private IP or the Cloud SQL Proxy binary. You have the details of the configuration and the dependencies on this page.
As you can see, you connect your instance directly with the resource definition (/projects/..../instances/.......). Your config file should look like this:
db.driver=com.google.cloud.spanner.jdbc.JdbcDriver
db.url=jdbc:cloudspanner:/projects/{YOUR_PROJECT_ID}/instances/{YOUR_INSTANCE_ID}/databases/{YOUR_DATABASE_ID}
db.dialect=com.google.cloud.spanner.hibernate.SpannerDialect