AWS has a new interface for ECS. The scheduled tasks tab is no longer there. Does anyone have an idea where it went (without searching for it in the top bar)?
I don't think it's available in the new console yet. You can use the toggle in the top left to go to the "classic" console and edit scheduled tasks there.
I am deploying a pipeline to Google Cloud Dataflow using Apache Beam. When I want to deploy a change to the pipeline, I drain the running pipeline and redeploy it. I would like to make this faster. It appears from the logs that on each deploy Dataflow builds new worker nodes from scratch: I see Linux boot messages going by.
Is it possible to drain the pipeline without tearing down the worker nodes so the next deployment can reuse them?
Rewriting Inigo's answer here:
Answering the original question: no, there's no way to do that. Updating the pipeline should be the way to go. I was not aware it was marked as experimental (we should probably change that), but the update approach has not changed in the last three years I have been using Dataflow. As for the special cases where update does not work: even if the feature you describe existed, the workers would still need the new code, so there is not really much to save, and update should work in most of the other cases.
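For reference, a minimal sketch of that update path with the Beam Python SDK; the project, region, bucket, topics, and job name below are all placeholders, and the job_name must match the running job you want to replace:

```python
# Minimal sketch: replace a running Dataflow job in place with --update
# instead of drain + redeploy. All GCP identifiers below are placeholders.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    runner="DataflowRunner",
    project="my-project",                # placeholder
    region="us-central1",                # placeholder
    temp_location="gs://my-bucket/tmp",  # placeholder
    streaming=True,
    job_name="my-streaming-job",  # must match the name of the running job
    update=True,                  # ask Dataflow to update rather than start fresh
)

with beam.Pipeline(options=options) as p:
    (p
     | "Read" >> beam.io.ReadFromPubSub(
         topic="projects/my-project/topics/events")  # placeholder topic
     | "Process" >> beam.Map(lambda msg: msg)        # your changed logic here
     | "Write" >> beam.io.WriteToPubSub(
         topic="projects/my-project/topics/out"))    # placeholder topic
```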
We deploy a Docker image that runs a simple Sinatra API to an ECS Fargate service. Right now our task definition defines the image using a :production tag. We want to use CodeDeploy for a blue/green deployment.
When code changes, should we push a new image with the :production tag and force a new deployment on our service, or instead use specific tags in our task definition (e.g. :97b9d390d869874c35c325632af0fc1c08e013cd), create a new task definition revision, and update our service to use it?
Our concern with the second approach is that we don't see any lifecycle rules around task definition revisions, so will they just build up until we have tens or hundreds of thousands?
If we use the first approach, will CodeDeploy be able to roll back a failed deployment if there is an issue?
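For concreteness, here is roughly what the second approach would look like with boto3; the family, cluster, service, and registry names are made up:

```python
# Rough sketch of the second approach: register a new task definition
# revision pinned to an immutable image tag, then point the service at it.
# Family, cluster, service, and registry names are placeholders, and a
# real Fargate task definition would also need an execution role for ECR.
import boto3

ecs = boto3.client("ecs")
git_sha = "97b9d390d869874c35c325632af0fc1c08e013cd"

revision = ecs.register_task_definition(
    family="sinatra-api",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    containerDefinitions=[{
        "name": "api",
        "image": f"123456789012.dkr.ecr.us-east-1.amazonaws.com/sinatra-api:{git_sha}",
        "essential": True,
        "portMappings": [{"containerPort": 4567}],  # Sinatra's default port
    }],
)

# With a rolling deployment you would update the service directly; with
# CodeDeploy blue/green the new revision is referenced from the AppSpec
# instead.
ecs.update_service(
    cluster="production",
    service="sinatra-api",
    taskDefinition=revision["taskDefinition"]["taskDefinitionArn"],
)

# Old revisions never expire on their own, but they can be removed with
# ecs.deregister_task_definition(taskDefinition="sinatra-api:<n>").
```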
Short answer
In both cases there is no task definition rollback: if your new image crashes, your current (old) tasks should still be alive. But if you are using health checks and the number of running tasks falls below the required amount (which might be due to an overflow of user traffic, etc.), Fargate will start new tasks with the latest task definition revision, which contains the bad image.
Long answer
Since you are just asking CodeDeploy to start tasks based on your image, it creates a new task definition holding your image's URI so it can pull the correct image. That new task definition will always be used to start new Fargate tasks.
So when Fargate finds that it needs to create a task, it will always try to use the latest revision, which will always be the one with the bad image.
The good thing is that if your old-image tasks work correctly, they should still be alive: the minimum number of running tasks would be 1, and since the new tasks keep failing, the old-image tasks will not be decommissioned.
You can, however, overcome this by adding a CloudWatch event that triggers a Lambda which either updates the service to a new task definition revision with the good image tag or points it back at the previous task definition revision. Here is an article from AWS about this: https://aws.amazon.com/blogs/compute/automating-rollback-of-failed-amazon-ecs-deployments/
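A minimal sketch of such a rollback Lambda with boto3; the cluster and service names are placeholders, and it naively assumes the previous revision was the good one:

```python
# Sketch: roll an ECS service back to its previous task definition
# revision when the CloudWatch event fires. Cluster/service names are
# placeholders; assumes revision N-1 was the last known-good one.
import boto3

ecs = boto3.client("ecs")

def handler(event, context):
    service = ecs.describe_services(
        cluster="production", services=["sinatra-api"]
    )["services"][0]

    # e.g. "arn:aws:ecs:...:task-definition/sinatra-api:42"
    family, _, revision = service["taskDefinition"].rpartition(":")
    previous = f"{family}:{int(revision) - 1}"

    ecs.update_service(
        cluster="production",
        service="sinatra-api",
        taskDefinition=previous,
        forceNewDeployment=True,
    )
```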
A bit more on how Fargate deployments work and why your old tasks keep running when a new deployment fails: Fargate first provisions the new tasks, and only when all the new tasks are running well does it decommission the old ones. So if the new tasks do not run properly, the old tasks should still be alive.
I have a use case where I schedule a task 24h into the future after an event occurs. This task represents some sort of "deadline" for other things to happen.
The scheduled task triggers the creation of a report. If not all of the above-mentioned "other things" have completed by this time, the triggered report creation process creates it anyway with the information it has at the time.
If, on the other hand, all the other things do complete before these 24h, then ideally I'd like to re-use the same Google Cloud Task to trigger the same process (as it's identical to the previous case but will contain all of the information possible).
I would imagine the easiest way to achieve the above is to:
schedule a task 24h into the future
if all information arrives: run the task early, before its scheduled time
Reading through the Google Cloud Tasks documentation, however, I don't see an option to run a task early. That feature does exist in the Cloud Tasks console, so I was wondering whether it is also available in the API and client libraries.
Thanks!
This is probably what you're looking for:
https://cloud.google.com/tasks/docs/reference/rest/v2/projects.locations.queues.tasks/run
NOTE: it does say, however, that "This command is meant to be used for manual debugging".
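With the Python client library this would look roughly like the following; the project, location, queue, and task IDs are placeholders:

```python
# Sketch: force a scheduled Cloud Task to run immediately via run_task.
# Project, location, queue, and task IDs are placeholders; the task name
# is the one returned when the task was created.
from google.cloud import tasks_v2

client = tasks_v2.CloudTasksClient()
task_name = client.task_path(
    "my-project", "us-central1", "deadline-queue", "report-task-123"
)

# Dispatches the task now, regardless of its original schedule_time.
client.run_task(name=task_name)
```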
Applying cf push to an existing running application stops and starts the application instance with the new artifact.
This application has a route name assigned.
1) In order to assess the downtime for a banking app during a cf push, what are the steps involved, from stopping the existing application instance to starting the new one?
2) Does Blue-Green deployment decrease the downtime?
The steps performed by cf push are explained in the flowchart shown here. They include:
creation of application
upload
staging
Blue-green deployment eliminates downtime caused by pushes of new application versions.
The basic workflow is described pretty well in the documentation. The basic idea is to deploy the new version side by side with the old one, assign the application's route to both of them, then remove the route from the old version. This way, the application route is never unavailable.
There is at least the CF CLI plugin blue-green-deploy, which will help you automate this workflow so you do not have to take care of the individual steps yourself.
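For illustration, the manual workflow can also be scripted by shelling out to the cf CLI, roughly like this; the app names, domain, and hostname are placeholders:

```python
# Rough sketch of the manual blue/green steps, shelling out to the cf CLI.
# App names, domain, and hostname are placeholders.
import subprocess

def cf(*args):
    subprocess.run(["cf", *args], check=True)

# 1. Push the new (green) version side by side under a temporary route.
cf("push", "myapp-green", "-n", "myapp-green")
# 2. Map the production route to the green version; both now get traffic.
cf("map-route", "myapp-green", "example.com", "--hostname", "myapp")
# 3. Remove the production route from the old (blue) version.
cf("unmap-route", "myapp-blue", "example.com", "--hostname", "myapp")
# 4. Once the green version looks healthy, stop the blue one.
cf("stop", "myapp-blue")
```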
I'm setting up a Continuous Delivery pipeline for my team with Jenkins. As a final step, we want to deploy to AWS.
I came across this while searching:
The last step is a button you can click to trigger the deployment. Very nice! However, I searched through the Jenkins plugins page and I don't think it is there (or it is under a vague name).
Any ideas what it could be?
I'm not sure about the specific plugin you are looking for, but there is a Jenkins plugin for CodeDeploy, which can automatically create a deployment as a post-build action. See: https://github.com/awslabs/aws-codedeploy-plugin
It really depends on what kind of requirements you have for the actual deployment procedure. One thing to keep in mind, if you set up your pipelines automatically as infrastructure as code (e.g. through JobDSL or Jenkins Job Builder), is that the particular plugins must be supported. For that reason it can sometimes be more convenient to just script your deployments instead of relying on plugins. I've implemented multiple deployment jobs from Jenkins to AWS using plain AWS CLI commands, e.g. triggering CloudFormation creation/updates.
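As a sketch of that scripted approach, a deploy step run from a Jenkins job could look like the following with boto3; the stack name, template file, and parameter are placeholders:

```python
# Sketch of a scripted deploy step, e.g. run from a Jenkins job, that
# updates a CloudFormation stack with boto3 instead of using a plugin.
# Stack name, template file, and parameters are placeholders.
import boto3

cfn = boto3.client("cloudformation")

with open("template.yaml") as f:
    template_body = f.read()

cfn.update_stack(
    StackName="my-service",
    TemplateBody=template_body,
    Parameters=[{"ParameterKey": "ImageTag", "ParameterValue": "97b9d39"}],
    Capabilities=["CAPABILITY_IAM"],
)

# Block until the update finishes so a failed deploy fails the build.
cfn.get_waiter("stack_update_complete").wait(StackName="my-service")
```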
It turns out that there is a button to trigger the operation in the plugin. It was hard to spot because the plugin's UI was redesigned and the button became smaller.