AWS Batch Trigger a spring boot microservices - amazon-web-services

I have the below two requirements; can you please share any suggestions?
Option 1: AWS Batch triggers a container on ECS Fargate (on demand).
Option 2: AWS Batch triggers a Spring app deployed on ECS Fargate (running permanently).
So for option 1, I need to start a Spring Boot app on ECS Fargate. I understand that in AWS Batch we can specify the Fargate cluster so that when the AWS Batch job runs, the app gets started.
For option 2, I have a Spring Boot app deployed on ECS Fargate that is always running, and inside it there is a Spring Batch job. Now AWS Batch needs to trigger that Spring Batch job. Is this possible? If so, can you please share a sample implementation?
Also, from my client app or program I need to update AWS Batch to say whether the job succeeded or failed. Can you please share a sample for that as well?

AWS Batch only executes ECS tasks, not ECS services. For option 1, to launch a container (your app that does the work you want) on ECS Fargate, you would need to specify an AWS Batch compute environment of type Fargate, a job queue that references the compute environment, and a job definition for the task (what container to run, what command to send, and what CPU and memory resources are required). See the Learn AWS Batch workshop or the AWS Batch Getting Started documentation for more information.
For option 2, AWS Batch and Spring Batch are orthogonal solutions. You should either call the Spring Batch API endpoint directly or rely on AWS Batch alone; using both is not recommended unless it's something you don't have control over.
But to answer your question: calling a non-AWS API endpoint is handled by your container and application code. AWS Batch does not prevent this, but you would need to make sure the container has secure access to the proper credentials to call the Spring Boot app. Once your Batch job calls the API, you have two choices:
1. Immediately exit and track the status of the Spring Batch operations elsewhere (i.e. the Batch job's only task is to call the API, and SUCCESS = "able to send the API request successfully", FAIL = "not able to call the API").
2. Call the API, then enter a loop where you poll the status of the Spring Batch job until it completes (successfully or not), exiting the AWS Batch job with the same state as the Spring Batch job.
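The second choice can be sketched as a small poller. The terminal states below are Spring Batch's BatchStatus values; the status-fetching callable (e.g. an HTTP GET against an endpoint your Spring Boot app exposes) is an assumption left to your application. Since AWS Batch records success or failure from the container's exit code, this also answers the question about reporting status back to AWS Batch:

```python
import time

def wait_for_spring_batch(fetch_status, poll_seconds=30, timeout_seconds=3600):
    """Poll a Spring Batch job's status until it reaches a terminal state.

    fetch_status is any callable returning the job's current status string
    (for example, a function that GETs a status endpoint on your app).
    Returns 0 on COMPLETED and 1 otherwise, so the value can be passed
    straight to sys.exit() and AWS Batch will record the same outcome.
    """
    terminal = {"COMPLETED", "FAILED", "STOPPED", "ABANDONED"}
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in terminal:
            return 0 if status == "COMPLETED" else 1
        time.sleep(poll_seconds)
    return 1  # timed out waiting: report failure to AWS Batch
```

Calling `sys.exit(wait_for_spring_batch(...))` at the end of the container entrypoint makes the AWS Batch job mirror the Spring Batch outcome.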

Related

Migrating on-premises Python ETL scripts that feed a Splunk Forwarder from a syslog box to AWS?

I've been asked to migrate on-premises Python ETL scripts that live on a syslog box over to AWS. These scripts run as cron jobs and output logs that a Splunk Forwarder parses and sends to our Splunk instance for indexing.
My initial idea was to deploy a CloudWatch-triggered Lambda function that spins up an EC2 instance, runs the ETL scripts cloned to that instance (they take approximately 30 minutes to run), and then brings the instance down. Another idea was to containerize the scripts and run them as ECS task definitions.
Any help moving forward would be nice; I would like to deploy this as IaC, preferably with troposphere/boto3.
"Another idea was to containerize the scripts and run them as task definitions"
This is probably the best approach. You can include the Splunk universal forwarder container in your task definition (ensuring both containers are configured to mount the same storage where the logs are held) to get the logs into Splunk. You can schedule task execution just like Lambda functions. As an alternative to the forwarder container, if you can configure the scripts to log to stdout/stderr instead of log files, you can set up your Docker log driver to send output directly to Splunk.
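For illustration, a task definition along those lines could be built as a plain dict and passed to boto3's `ecs.register_task_definition(**...)`. All names, URLs, and the token here are placeholders; a real Fargate definition also needs an executionRoleArn, and the Splunk token belongs in Secrets Manager rather than plain text:

```python
def etl_task_definition(image, splunk_url, splunk_token):
    """Fargate task definition whose container logs go straight to Splunk
    via the docker splunk log driver (the scripts must then write their
    logs to stdout/stderr rather than to files)."""
    return {
        "family": "etl-scripts",
        "requiresCompatibilities": ["FARGATE"],
        "networkMode": "awsvpc",
        "cpu": "512",
        "memory": "1024",
        # A real definition also needs "executionRoleArn": "...".
        "containerDefinitions": [
            {
                "name": "etl",
                "image": image,  # e.g. an ECR image URI
                "essential": True,
                "logConfiguration": {
                    "logDriver": "splunk",
                    "options": {
                        "splunk-url": splunk_url,
                        "splunk-token": splunk_token,
                    },
                },
            }
        ],
    }
```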
Assuming you don't already have a cluster with capacity to run the task, you can use a capacity provider for the ASG attached to the ECS cluster to automatically provision instances into the cluster whenever the task needs to run (and scale down after the task completes).
Or use Fargate tasks with EFS storage and you don't have to worry about cluster provisioning at all.

AWS ECS: Is it possible to make a scheduled ecs task accessible via ALB?

My current ECS infrastructure works as follows: ALB -> ECS Fargate -> ECS service -> ECS task.
Now I would like to replace the normal ECS task with a scheduled ECS task. But nowhere can I find a way to connect the scheduled ECS task to the service and thus make it accessible via the ALB. Isn't that possible?
Thanks in advance for any answers.
A scheduled task is really more for something that runs to complete a given task and then exits.
If you want to connect your ECS task to a load balancer you should run it as part of a Service. ECS will handle connecting the task to the load balancer for you when it runs as a Service.
You mentioned in comments that your end goal is to run a dev environment for a specific time each day. You can do this with an ECS service and scheduled auto-scaling. This feature isn't available through the AWS Web console for some reason, but you can configure it via the AWS CLI or one of the AWS SDKs. You would configure it to scale to 0 during the time you don't want your app running, and scale up to 1 or more during the time you do want it running.
A scheduled ECS task is a one-off task launched with the RunTask API; it has no ties to an ALB because it's not part of an ECS service. You could probably make this work, but you'd need to build the wiring yourself by finding out the details of the task and adding it to the target group. I believe what you need to do (if you want ECS to deal with the wiring) is to schedule a Lambda that increments the desired number of tasks in the service. I am also wondering what the use case is for this (as maybe there are other ways to achieve it). Scheduled tasks are usually batch jobs of some sort, not web services that need to be wired to a load balancer. What is the scenario / end goal you have?
UPDATE: I missed the non-UI support for scheduling the desired number of tasks, so the Lambda isn't really needed.

Best solution to deploy spring batch application on AWS cloud env

I am developing a Spring Batch application and want to deploy it on AWS with minimal resource usage; as soon as the batch completes, the resources should be terminated.
Please suggest the best AWS service for that. My job will take 1-2 hours to run.
Edit: AWS Batch seems like the right option, but I'm not sure how inter-node communication would work: Spring Batch uses messaging middleware for inter-node communication, whereas AWS Batch suggests IP-based approaches such as MPI or Apache MXNet for the same purpose.

Running Scripts or Commands in Spinnaker Pipelines

I'm trying to run scripts as part of some of my deployment pipelines in Spinnaker. I don't want to use Jenkins to run these scripts. I would use a Kubernetes job, but these scripts need to execute prior to the Kubernetes deployment.
I was debating creating ECS tasks in AWS which I'd like to run on demand during one of the stages in my pipeline. Does anyone know if it's possible to execute an ECS task directly from Spinnaker?
If not, are there any other ways to execute a command or script in a pipeline outside of using a Kubernetes job or Jenkins server?
One way to do this is to use the Run Job (Manifest) stage and just point it at another Kubernetes cluster. This approach gives you some flexibility, since you can monitor the pipeline stage for completion status.
You can also create an arbitrary API endpoint and trigger it via a webhook stage that monitors for completion, putting whatever script execution environment you prefer (e.g. Lambda, ECS) behind that API endpoint.

AWS-ECS Task Services restart automation

Currently we are running a microservices application with a serverless architecture on AWS ECS. Whenever we deploy or update an artifact in ECR, we need to restart the services by changing the task count from 1 to 0 and back again so the service picks up the new artifact. As you can imagine, this process is very manual and takes several steps, and I want to automate it. Is it possible to use AWS Lambda or CloudWatch, or any configuration, to skip the manual process? What kind of code, language, and automation examples do I need to achieve this?
Take a look at the ecs-deploy script. Basically, it replaces an existing service's image with the latest (or a specific) image from ECR. So if you have automation that updates ECR with the latest image, this script will deploy that image to ECS.
A setup that could work if you have a CI/CD pipeline: upon uploading to ECR, trigger a Lambda that restarts the corresponding service, supplying the Lambda any variables it needs, such as the ECR tag to pull or the service name.
ECS has an option to restart a service's tasks with ForceNewDeployment. In Python, the call would look like:
import boto3

ecs = boto3.client("ecs")
response = ecs.update_service(
    cluster="my-cluster",    # placeholder cluster and service names
    service="my-service",
    desiredCount=1,
    forceNewDeployment=True,
)
From https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ecs.html#ECS.Client.update_service