I already have an ECS cluster deployed across multiple EC2 instances. Now I want to integrate it with X-Ray to troubleshoot some issues.
Is there a way to do this without redeploying the cluster?
Separately, I used the start.sh and the Dockerfile from "Can I run aws-xray on the same ECS container?" to build a Tomcat container with the X-Ray daemon inside, but the X-Ray console is still empty, even after opening port 2000 (TCP and UDP) in the security group and allowing everything in the NACL. Am I missing anything?
Getting X-Ray to work with your application has two parts: (1) instrumenting your application with an AWS X-Ray SDK, which generates the trace data for your application; and (2) running the daemon process so that the SDK can send that trace data to the X-Ray service, where it is displayed in the console. You have done only the second part. Once you instrument your application, you'll need to redeploy it to the cluster.
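For a Java app like yours, a minimal sketch of part (1) with the AWS X-Ray SDK for Java: register the X-Ray servlet filter so every incoming request opens a segment. This assumes a Spring Boot app; the segment name "MyApp" is a placeholder.

```java
import com.amazonaws.xray.javax.servlet.AWSXRayServletFilter;
import javax.servlet.Filter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class XRayConfig {
    // Opens an X-Ray segment for each incoming HTTP request; the SDK
    // then sends the trace data to the daemon listening on UDP port 2000.
    @Bean
    public Filter tracingFilter() {
        return new AWSXRayServletFilter("MyApp"); // placeholder segment name
    }
}
```

For plain Tomcat (no Spring), you can declare the same filter in web.xml instead.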
I have the following two requirements; can you please share any suggestions?
Option 1: AWS Batch triggers a task on ECS Fargate (on demand).
Option 2: AWS Batch triggers a Spring app (image in ECR) that is deployed on Fargate and running permanently.
So for Option 1, I need to start a Spring Boot app on ECS Fargate. As I understand it, in AWS Batch we can specify the Fargate cluster so that when the AWS Batch job runs, the app gets started.
For Option 2, I have a Spring Boot app deployed on ECS Fargate that runs permanently, and it contains a Spring Batch job. AWS Batch now needs to trigger that Spring Batch job. Is this possible? If so, can you please share a sample implementation?
Also, from my client app or program I need to report back to AWS Batch whether the job succeeded or failed. Can you share a sample for that as well?
AWS Batch only executes ECS tasks, not ECS services. For option 1: to launch a container (your app that does the work you want) on ECS Fargate, you need to specify an AWS Batch compute environment of type Fargate, a job queue that references that compute environment, and a job definition for the task (what container to run, what command to send, and what CPU and memory resources are required). See the Learn AWS Batch workshop or the AWS Batch Getting Started documentation for more information.
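Once the compute environment, queue, and job definition exist, submitting a job is a single API call. A hedged sketch with the AWS SDK for Java v1 (the job, queue, and definition names are placeholders):

```java
import com.amazonaws.services.batch.AWSBatch;
import com.amazonaws.services.batch.AWSBatchClientBuilder;
import com.amazonaws.services.batch.model.SubmitJobRequest;
import com.amazonaws.services.batch.model.SubmitJobResult;

public class SubmitBatchJob {
    public static void main(String[] args) {
        AWSBatch batch = AWSBatchClientBuilder.defaultClient();

        // The job definition points at your container image and its
        // command/CPU/memory; the queue points at the Fargate compute environment.
        SubmitJobResult result = batch.submitJob(new SubmitJobRequest()
                .withJobName("spring-app-run")          // placeholder job name
                .withJobQueue("my-fargate-queue")       // placeholder queue
                .withJobDefinition("my-spring-app:1")); // placeholder definition:revision

        System.out.println("Submitted job " + result.getJobId());
    }
}
```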
For option 2: AWS Batch and Spring Batch are orthogonal solutions. You should either call the Spring Batch API endpoint directly OR rely on AWS Batch. Using both is not recommended unless this is something you don't have control over.
But to answer your question: calling a non-AWS API endpoint is handled in your container and application code. AWS Batch does not prevent this, but you need to make sure the container has secure access to the proper credentials to call the Spring Boot app. Once your Batch job calls the API, you have two choices (a sketch of the second follows below):
Immediately exit and track the status of the Spring Batch operation elsewhere (i.e., the Batch job just calls the API, and SUCCESS = "able to send the API request successfully", FAIL = "not able to call the API").
Call the API, then enter a loop where you poll the status of the Spring Batch job until it completes, successfully or not, exiting the AWS Batch job with the same state the Spring Batch job ended in.
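A minimal Java sketch of that second choice, assuming a hypothetical status endpoint on your Spring app (the URLs and the COMPLETED/FAILED status strings are illustrative, not a real Spring Batch API):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TriggerAndPoll {
    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();
        String base = "https://my-spring-app.example.com"; // placeholder host

        // 1. Trigger the Spring Batch job (hypothetical endpoint)
        http.send(HttpRequest.newBuilder(URI.create(base + "/jobs/my-job/start"))
                        .POST(HttpRequest.BodyPublishers.noBody()).build(),
                HttpResponse.BodyHandlers.ofString());

        // 2. Poll its status until it finishes, then exit with the same state;
        //    AWS Batch maps exit code 0 to SUCCEEDED and non-zero to FAILED.
        while (true) {
            String status = http.send(
                    HttpRequest.newBuilder(URI.create(base + "/jobs/my-job/status")).build(),
                    HttpResponse.BodyHandlers.ofString()).body().trim();
            if (status.equals("COMPLETED")) System.exit(0);
            if (status.equals("FAILED"))    System.exit(1);
            Thread.sleep(30_000); // wait 30s between polls
        }
    }
}
```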
We are using AWS Elastic Beanstalk to deploy our application. Currently we have two Elastic Beanstalk applications and two worker processes (which pick messages from an AWS SQS queue and process them).
What would be the best tools to view the combined logs from the Elastic Beanstalk applications and the workers, plus a few more on-premises applications in the future?
Throw the logs into AWS Elasticsearch and then use Kibana, which comes with Elasticsearch, to visualize them.
I followed the suggestion and configured CloudWatch Logs, Elasticsearch, and Kibana, but I am not getting all the logs and insights. I can see the httpd access and error logs and the EB access and error logs. It also involves a lot of AWS services and configuration, and since I am very new to AWS, I am having trouble setting things up.
Alternatively, as suggested by my boss, I tried New Relic. It was very simple to configure, and I can see a lot of insights into my EB application in the New Relic console. I can also monitor my browser, iOS app, Android app, and AWS infrastructure (AWS services) in the one New Relic console. Some details are missing from the New Relic console, such as error stack traces, request params in POST requests, and so on; but I don't want to share such details with New Relic anyway, so that is OK.
For now I will use New Relic and CloudWatch Logs (for real-time investigation of failing HTTP REST services), but I will explore the options inside AWS further: Elasticsearch and Kibana.
Many Thanks
I'm trying to deploy a backend application to AWS Fargate using CloudFormation templates that I found. When I was using the Docker image training/webapp, I was able to deploy it successfully and access it via the externalUrl from the networking stack for the app.
When I try to deploy our backend image, I can see the stacks deploying correctly, but when I go to the externalUrl I get 503 Service Temporarily Unavailable and I'm unable to reach the app. Another thing I've noticed on Docker Hub is that the image is pulled continuously, the whole time the CloudFormation services are running.
The backend is some kind of Maven project; I don't know exactly what, but I know it works locally. Getting the container with this backend image up and running takes about 8 minutes, though. I'm not sure if this affects Fargate? Any idea how to get it working?
It sounds like you need to find the actual error you're experiencing; the 503 isn't enough information. Can you provide some other context?
I'm not familiar with Fargate, but I have been using ECS quite a bit this year, and I generally find that information by going to (on the dashboard) ECS -> cluster -> service -> Events. The Events tab gives more specific errors as to what is happening.
My ECS deployment problems generally fall into these categories:
1. The container is not exposing the same port as in the task definition; this could be the case if you're deploying from a stack written by someone else.
2. The task definition's memory/CPU limits don't grant enough resources for the application, so the task has trouble being placed (probably more of a problem with ECS on EC2 than with Fargate, but you never know).
3. The timeout in the task definition is not set to accommodate the 8-minute startup; see this question, which has a lot of this covered.
4. The start command in the task definition does not work as expected with the container you're trying to deploy.
If it is pulling from Docker Hub continuously, my bet would be that it's 1, 3, or 4, and ECS is attempting to pull the image over and over again.
Try adding a health check grace period of 60 seconds by going to ECS -> cluster -> service -> Update and editing the Network Access section.
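If you prefer to set this programmatically, a sketch with the AWS SDK for Java v1 (the cluster and service names are placeholders):

```java
import com.amazonaws.services.ecs.AmazonECS;
import com.amazonaws.services.ecs.AmazonECSClientBuilder;
import com.amazonaws.services.ecs.model.UpdateServiceRequest;

public class SetGracePeriod {
    public static void main(String[] args) {
        AmazonECS ecs = AmazonECSClientBuilder.defaultClient();
        // Give slow-starting containers 60s before load balancer
        // health checks count against the task
        ecs.updateService(new UpdateServiceRequest()
                .withCluster("my-cluster")   // placeholder cluster name
                .withService("my-service")   // placeholder service name
                .withHealthCheckGracePeriodSeconds(60));
    }
}
```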
I have a classic Scala app that produces three different logs in these locations:
/var/log/myapp/log1/mylog.log
/var/log/myapp/log2/another.log
/var/log/myapp/log3/anotherone.log
I containerized the app and it is working fine; I can get those logs with a Docker volume mount.
Now the app/container will be deployed in AWS ECS with an Auto Scaling group, so multiple containers may run on a single ECS host.
I would like to use CloudWatch to monitor my application logs.
One solution could be to put the AWS logs agent inside my application container.
Is there a better way to get those application logs from the container into CloudWatch Logs?
Help is very much appreciated.
When using Docker, the recommended approach is not to log to files, but to send logs to stdout and stderr. Doing so prevents the logs from being written to the container's filesystem and (depending on the logging driver in use) allows you to view the logs using the docker logs / docker container logs subcommands.
Many applications have a configuration option to log to stdout/stderr, but if that's not an option, you can create a symlink to redirect output; for example, the official NGINX image on Docker Hub uses this approach.
Docker supports logging drivers, which allow you to send logs to (among others) AWS CloudWatch. After you modify your image to log to stdout/stderr, you can configure the awslogs logging driver.
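On ECS, the log driver is set per container in the task definition. A sketch of registering one with the AWS SDK for Java v1 (the family, image, log group, and region are placeholders; you can set the same logConfiguration in a CloudFormation or JSON task definition instead):

```java
import com.amazonaws.services.ecs.AmazonECS;
import com.amazonaws.services.ecs.AmazonECSClientBuilder;
import com.amazonaws.services.ecs.model.ContainerDefinition;
import com.amazonaws.services.ecs.model.LogConfiguration;
import com.amazonaws.services.ecs.model.LogDriver;
import com.amazonaws.services.ecs.model.RegisterTaskDefinitionRequest;

public class RegisterWithAwslogs {
    public static void main(String[] args) {
        AmazonECS ecs = AmazonECSClientBuilder.defaultClient();

        // Route the container's stdout/stderr to CloudWatch Logs
        LogConfiguration logConfig = new LogConfiguration()
                .withLogDriver(LogDriver.Awslogs)
                .addOptionsEntry("awslogs-group", "/ecs/my-scala-app") // placeholder group
                .addOptionsEntry("awslogs-region", "us-east-1")        // placeholder region
                .addOptionsEntry("awslogs-stream-prefix", "app");

        ecs.registerTaskDefinition(new RegisterTaskDefinitionRequest()
                .withFamily("my-scala-app")
                .withContainerDefinitions(new ContainerDefinition()
                        .withName("my-scala-app")
                        .withImage("myrepo/my-scala-app:latest") // placeholder image
                        .withMemory(512)
                        .withLogConfiguration(logConfig)));
    }
}
```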
More information about logging in Docker can be found in the "logging" section of the documentation.
You don't need the log agent if you can change the code.
You can publish custom metric data directly into CloudWatch, as this page describes: https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/examples-cloudwatch-publish-custom-metrics.html
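A condensed sketch in the style of that linked AWS SDK for Java v1 guide (the metric, dimension, and namespace names are placeholders to adapt to your app):

```java
import com.amazonaws.services.cloudwatch.AmazonCloudWatch;
import com.amazonaws.services.cloudwatch.AmazonCloudWatchClientBuilder;
import com.amazonaws.services.cloudwatch.model.Dimension;
import com.amazonaws.services.cloudwatch.model.MetricDatum;
import com.amazonaws.services.cloudwatch.model.PutMetricDataRequest;
import com.amazonaws.services.cloudwatch.model.StandardUnit;

public class PublishMetric {
    public static void main(String[] args) {
        AmazonCloudWatch cw = AmazonCloudWatchClientBuilder.defaultClient();

        // Tag the datum so you can filter per app/container
        Dimension dimension = new Dimension()
                .withName("APP_NAME")
                .withValue("my-scala-app"); // placeholder value

        MetricDatum datum = new MetricDatum()
                .withMetricName("ProcessedRecords") // placeholder metric
                .withUnit(StandardUnit.Count)
                .withValue(42.0)
                .withDimensions(dimension);

        cw.putMetricData(new PutMetricDataRequest()
                .withNamespace("MYAPP/TRAFFIC") // placeholder namespace
                .withMetricData(datum));
    }
}
```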
I created two applications with Spring Boot and deployed them on Amazon Elastic Beanstalk. One is deployed in a Java environment, the other in a Tomcat environment.
Tomcat has its catalina.out log, where I can find the entries written by my Spring application with Log4j. The Java application has a web-1.log, but it is rotated every hour, and I can only find the last 5 logs.
Is there a better way to log, to store the old logs (maybe on S3), or to change the retention policy?
You can have the rotated logs published to S3 (Elastic Beanstalk supports this). You also have the ELK stack option, but that requires some effort.
If you want a more AWS-native solution, you can use CloudWatch: for example, set up your logger with a custom appender that sends logs to CloudWatch.
Using CloudWatch gives you a friendlier way to check your logs.
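A sketch of the core of such an appender, using the CloudWatch Logs API from the AWS SDK for Java v1. The group/stream names are placeholders, the log group is assumed to already exist, and a real Log4j appender would wrap this in an Appender subclass and batch events:

```java
import com.amazonaws.services.logs.AWSLogs;
import com.amazonaws.services.logs.AWSLogsClientBuilder;
import com.amazonaws.services.logs.model.CreateLogStreamRequest;
import com.amazonaws.services.logs.model.InputLogEvent;
import com.amazonaws.services.logs.model.PutLogEventsRequest;

public class CloudWatchLogSender {
    private static final String GROUP = "/my-app/application"; // placeholder group
    private static final String STREAM = "web-1";              // placeholder stream

    public static void main(String[] args) {
        AWSLogs logs = AWSLogsClientBuilder.defaultClient();

        // The stream must exist before events can be put into it
        logs.createLogStream(new CreateLogStreamRequest(GROUP, STREAM));

        // Send one log line; subsequent calls may need the sequence
        // token returned by the previous putLogEvents call
        logs.putLogEvents(new PutLogEventsRequest()
                .withLogGroupName(GROUP)
                .withLogStreamName(STREAM)
                .withLogEvents(new InputLogEvent()
                        .withTimestamp(System.currentTimeMillis())
                        .withMessage("hello from my appender")));
    }
}
```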