Starting AWS DMS Replication Task in Terraform - amazon-web-services

Is there any way to start an AWS Database Migration Service full-load-and-cdc replication task through Terraform? Preferably, this would start automatically upon creation of the task.
The AWS DMS console provides an option to "Start task on create", and the AWS CLI provides a start-replication-task command, but I'm not seeing similar options in the Terraform resources. The aws_dms_replication_task provides the cdc_start_time argument, but I think this may apply only to cdc tasks. I've tried setting this argument to a number of past/current/future timestamps with my full-load-and-cdc replication task, but the task never started (it was merely created and entered the ready state).
I'd be glad to log a feature request to Terraform if this feature is not supported, but wanted to check with the community first to see if I was overlooking an existing way to do this today.
(Note: This question has also been logged to the Terraform Google group.)

I've logged an issue for this feature request:
Terraform AWS Provider #2083: Support Starting AWS Database Migration Service Replication Task
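In the meantime, one workaround others have used is to shell out to the AWS CLI after Terraform creates the task. A minimal sketch, assuming the AWS CLI is installed and credentialed wherever Terraform runs (all resource and file names here are illustrative, not from the question):

```terraform
# Hypothetical workaround: start the task via the AWS CLI once Terraform has created it.
resource "aws_dms_replication_task" "example" {
  migration_type           = "full-load-and-cdc"
  replication_instance_arn = aws_dms_replication_instance.example.replication_instance_arn
  replication_task_id      = "example-task"
  source_endpoint_arn      = aws_dms_endpoint.source.endpoint_arn
  target_endpoint_arn      = aws_dms_endpoint.target.endpoint_arn
  table_mappings           = file("table-mappings.json")
}

resource "null_resource" "start_task" {
  # Re-run the provisioner if the task is ever recreated.
  triggers = {
    task_arn = aws_dms_replication_task.example.replication_task_arn
  }

  provisioner "local-exec" {
    command = "aws dms start-replication-task --replication-task-arn ${aws_dms_replication_task.example.replication_task_arn} --start-replication-task-type start-replication"
  }
}
```

This is only a stopgap: the started task is invisible to Terraform state, so stopping or restarting it remains a manual (or scripted) concern.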

Related

Terraform: find out if an AWS service is supported in the targeted region

We are using CodePipeline to deploy our application onto AWS EC2 nodes.
However, CodePipeline is not supported in all AWS Regions, which causes our Terraform deployment to fail.
I would like to use a user data script on the EC2 nodes in the AWS Regions that lack CodePipeline support.
Is there any way to detect whether the CodePipeline service is supported in the targeted region through Terraform?
AWS lists the endpoints for CodePipeline in this documentation - https://docs.aws.amazon.com/general/latest/gr/codepipeline.html
My hypothetical solution is as follows:
Run a curl command via local-exec, or use the http data source, to hit the endpoint in the targeted region; the endpoints follow the pattern https://codepipeline.<InsertTargetedRegion>.amazonaws.com
Based on the result of step 1, make a decision: if the endpoint is reachable, create the CodePipeline and its downstream resources; if it is not reachable, create an EC2 launch configuration with a user data script and drop the CodePipeline.
The other solution (which is a little clumsy) I can think of is to maintain a Terraform list of the regions that do not support CodePipeline and make the decision based on that.
However, this clumsy solution requires human effort (checking whether a region supports CodePipeline and updating the list) and updating the Terraform configuration every now and then.
I am wondering if there is any other way to know whether the targeted region supports CodePipeline.
Thank You.
I think that having a static list of supported regions is simply the easiest and most direct way of knowing where the script can run. The logic is then quite easy: if the current region is supported, continue; if not, error and stop. Any other logic will be cumbersome and unnecessary.
Yes, you can use a static file, but is that a scalable solution? How would you track when a new region is added? I think this link will help you:
https://aws.amazon.com/blogs/aws/new-query-for-aws-regions-endpoints-and-more-using-aws-systems-manager-parameter-store/
With the AWS CLI you can query service availability by region.
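Those same public parameters can also be read from Terraform itself, which removes the need for a static list entirely. A sketch, assuming a reasonably recent AWS provider that includes the aws_ssm_parameters_by_path data source (the parameter path is the documented public one; everything else is illustrative):

```terraform
# Query the public SSM parameters listing the regions where CodePipeline is available.
data "aws_ssm_parameters_by_path" "codepipeline_regions" {
  path = "/aws/service/global-infrastructure/services/codepipeline/regions"
}

data "aws_region" "current" {}

locals {
  # true when the provider's current region appears in the published list
  codepipeline_supported = contains(
    data.aws_ssm_parameters_by_path.codepipeline_regions.values,
    data.aws_region.current.name,
  )
}

# Resources can then be gated on the flag, e.g.:
#   count = local.codepipeline_supported ? 1 : 0
```

New regions then show up automatically as AWS publishes them, with no list to maintain.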

AWS ECS run latest task definition

I am trying to run the latest task definition image built from a GitHub deployment (CD). On AWS, each build creates a new task definition revision, for example "task-api:1", "task-api:2", but my cluster is still running "task-api:1" even though a new image has been built and a later revision exists. So far I have had to manually stop the old task and start a new one. How can I have it automated?
You must wrap your tasks in a service and use rolling updates for automated deployments.
When the rolling update (ECS) deployment type is used for your service, when a new service deployment is started the Amazon ECS service scheduler replaces the currently running tasks with new tasks.
Read: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/deployment-type-ecs.html
This is DevOps, so you need a CI/CD pipeline that will do the rolling updates for you. Look at CodeBuild, CodeDeploy, and CodePipeline (and CodeCommit if you host your code repository in AWS for your CI/CD).
Read: https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-ecs-ecr-codedeploy.html
This is a complex topic, but it pays off in the end.
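For reference, the step such a pipeline performs after pushing a new image usually amounts to two CLI calls (cluster, service, and family names here are illustrative):

```shell
# Register a new revision of the task definition from the repo's JSON file.
aws ecs register-task-definition --cli-input-json file://task-api.json

# Point the service at the family without a revision number; ECS resolves it
# to the latest ACTIVE revision and starts a rolling update of the tasks.
aws ecs update-service --cluster my-cluster --service task-api-service --task-definition task-api
```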
Judging from what you have said in the comments:
I created my task via the AWS console. I am running just the task definition on its own, without a service, plus a service with a task definition launched via EC2, not targeting both of them. In the task definition JSON files in both of my GitHub repositories, they are tied to a specific revision of a task (could that be the problem?).
It's difficult to understand exactly how you have this set up, and it'd probably be a good idea to go back and understand the services you are using a little better, using the guide you are following or the AWS documentation. Pushing a new task definition does not automatically update services to use the new definition.
That said, my guess is that you need to update the service in ECS to use the latest task definition. You can do that in many ways:
Through the console (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/update-service-console-v2.html).
Through the CLI (https://docs.aws.amazon.com/cli/latest/reference/ecs/update-service.html).
Through the IaC like the CDK (https://docs.aws.amazon.com/cdk/api/latest/docs/aws-ecs-readme.html).
This can be automated but you would need to set up a process to automate it.
I would recommend reading some guides on how you could automate deployment and updates using the CDK. Amazon provide a good guide to get you started https://docs.aws.amazon.com/cdk/latest/guide/ecs_example.html.
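If you go the IaC route with Terraform instead of the CDK, the usual pattern is to resolve the newest ACTIVE revision of the family at plan time and pin the service to it. A sketch (family, cluster, and service names are illustrative):

```terraform
# Look up the latest ACTIVE revision of the task definition family.
data "aws_ecs_task_definition" "api" {
  task_definition = "task-api"
}

resource "aws_ecs_service" "api" {
  name          = "task-api-service"
  cluster       = aws_ecs_cluster.main.id
  desired_count = 1

  # Pin the service to the newest revision found above; re-applying after a
  # new revision is registered triggers a rolling update.
  task_definition = "task-api:${data.aws_ecs_task_definition.api.revision}"
}
```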

Is it possible to run the Terraform/Ansible scripts without having them on a running EC2 instance?

Background:
We have several legacy applications that are running in AWS EC2 instances while we develop a new suite of applications. Our company updates their approved AMI's on a monthly basis, and requires all running instances to run the new AMI's. This forces us to regularly tear down the instances and rebuild them with the new AMI's. In order to comply with these requirements all infrastructure and application deployment must be fully automated.
Approach:
To achieve automation, I'm using Terraform to build the infrastructure and Ansible to deploy the applications. Terraform creates the EC2 instances, security groups, SSH keys, load balancers, Route 53 records, and an inventory file to be used by Ansible, which includes the IP addresses of the created instances. Ansible then deploys the legacy applications to the hosts supplied by the inventory file. I have a shell script that executes first the Terraform script and then the Ansible playbooks.
Question:
To achieve full automation, I need to run this process whenever an AMI is updated. The current AMI release is stored in Parameter Store, and Terraform can detect when there is a change, but I still need to manually trigger the job. We also have an AWS SNS topic to which I can subscribe to receive notification of new AMI releases. My initial thought was to simply put the Terraform/Ansible scripts on an EC2 instance and have a cron job run them monthly. This would likely work, but I wonder if it is the best approach. For starters, I would need to use an EC2 instance which itself would need to be updated with new AMIs, so unless I have another process to do this I would need to do it manually. Second, although our AMIs could potentially be updated monthly, sometimes they are not, so I would sometimes be running the jobs unnecessarily. Of course I could simply detect whether the AMI ID has changed and run the job accordingly, but it seems like a better approach would be to react to the AWS SNS topic.
Is it possible to run the Terraform/Ansible scripts without having them on a running EC2 instance? And how can I trigger the scripts in response to the SNS topic?
Options I was testing to trigger an Ansible playbook in response to webhooks from Alertmanager, to get some form of self-healing (might be useful for you):
Run Ansible in AWS Lambda, fronted by API Gateway as the webhook that Alertmanager triggers: https://medium.com/@jacoelho/ansible-in-aws-lambda-980bb8b5791b
An SNS receiver in AWS -> subscriber -> AWS Systems Manager, which supports Ansible:
https://aws.amazon.com/blogs/mt/keeping-ansible-effortless-with-aws-systems-manager/
Alertmanager targets a Jenkins webhook -> the Jenkins pipeline uses the Ansible plugin to execute playbooks:
https://medium.com/appgambit/ansible-playbook-with-jenkins-pipeline-2846d4442a31
Front the Ansible server with a webhook server that executes Ansible commands as post actions;
this can be a Flask-based web server or the git webhook tool provided below:
https://rubyfaby.medium.com/auto-remediation-with-prometheus-alert-manager-and-ansible-e4d7bdbb6abf
https://github.com/adnanh/webhook
You can also use AWX (Ansible Tower in open-source form), which exposes the Ansible server as an API endpoint (webhook); currently only GitHub and GitLab webhooks are supported.
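To sketch the SNS-triggered route in the questioner's own Terraform terms (topic, variable, and function names are illustrative, and this assumes a CodeBuild project whose buildspec runs terraform apply and then the Ansible playbooks):

```terraform
# Hypothetical wiring: the AMI-release SNS topic invokes a Lambda,
# which starts a CodeBuild project that runs Terraform and Ansible.
resource "aws_sns_topic_subscription" "ami_updates" {
  topic_arn = var.ami_release_topic_arn # the topic announcing new AMI releases
  protocol  = "lambda"
  endpoint  = aws_lambda_function.trigger_build.arn
}

resource "aws_lambda_permission" "allow_sns" {
  statement_id  = "AllowExecutionFromSNS"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.trigger_build.function_name
  principal     = "sns.amazonaws.com"
  source_arn    = var.ami_release_topic_arn
}

# The Lambda body (not shown) only needs to call codebuild:StartBuild;
# the CodeBuild project checks out the repo and runs the shell script
# that applies Terraform and executes the Ansible playbooks.
```

This keeps the scripts off any long-lived EC2 instance and runs them only when a release notification actually arrives.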

AWS ECS service error: Task long arn format must be enabled for launching service tasks with ECS managed tags

I have an ECS service running in a cluster, which has 1 task. Upon a task update, the service suddenly died with the error:
'service my_service_name failed to launch a task with (error Task long arn format must be enabled for launching service tasks with ECS managed tags.)'
Currently running tasks are automatically drained, and the above message shows up every 6 hours in the "Events" tab of the service. Any changes made to the service config do not repair the issue. Rolling back the task update also doesn't change anything.
I believe I'm already using the long ARN format. Looking for help.
This turned out to be an AWS bug, now acknowledged by them. It was supposed to manifest after Jan 1, 2020, but appeared early because of a workflow fault at AWS.
The resources were created by an IAM user who was later deleted, which is why the issue appears.
I simply removed the following from my task JSON input: propagateTags, enableECSManagedTags
It seems like you are tagging your Amazon ECS resources but did not opt in to that feature, so you have to opt in. Also note that if your deployment mechanism uses regular expressions to parse the old-format ARNs or task IDs, this may be a breaking change.
Starting today you can opt in to a new Amazon Resource Name (ARN) and
resource ID format for Amazon ECS tasks, container instances, and
services. The new format enables the enhanced ability to tag resources
in your cluster, as well as tracking the cost of services and tasks
running in your cluster.
In most cases, you don’t need to change your system beyond opting in
to the new format. However, if your deployment mechanism uses regular
expressions to parse the old format ARNs or task IDs, this may be a
breaking change. It may also be a breaking change if you were storing
the old format ARNs and IDs in a fixed-width database field or data
structure.
After you have opted in, any new ECS services or tasks have the new
ARN and ID format. Existing resources do not receive the new format.
If you decide to opt out, any new resources that you later create then
use the old format.
You can check this AWS Compute blog post on migrating to the new ARN format:
migrating-your-amazon-ecs-deployment-to-the-new-arn-and-resource-id-format-2
Tagging Your Amazon ECS Resources
To help you manage your Amazon ECS tasks, services, task definitions,
clusters, and container instances, you can optionally assign your own
metadata to each resource in the form of tags. This topic describes
tags and shows you how to create them.
Important
To use this feature, it requires that you opt-in to the new Amazon
Resource Name (ARN) and resource identifier (ID) formats. For more
information, see Amazon Resource Names (ARNs) and IDs.
ecs-using-tags
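If the account is managed with Terraform, the opt-in itself can be captured as code via the aws_ecs_account_setting_default resource (the equivalent of aws ecs put-account-setting-default); a sketch covering the three long-format settings:

```terraform
# Opt the account's defaults (per region) into the new long ARN/ID formats.
resource "aws_ecs_account_setting_default" "task_arn" {
  name  = "taskLongArnFormat"
  value = "enabled"
}

resource "aws_ecs_account_setting_default" "service_arn" {
  name  = "serviceLongArnFormat"
  value = "enabled"
}

resource "aws_ecs_account_setting_default" "container_instance_arn" {
  name  = "containerInstanceLongArnFormat"
  value = "enabled"
}
```

Note that, as quoted above, only resources created after opting in receive the new format; existing ones keep the old one.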

How to Automate the AWS Data Migration Services

Is there any way to schedule a DMS task at a specific time? In the AWS console I didn't find any related options.
Take a look at Automating AWS DMS Migration Tasks | AWS Database Blog:
Currently, DMS tasks cannot be scheduled using the DMS console. To
schedule DMS tasks, we need to use the native tools present in the OS
(Windows or Linux) that you use to access AWS resources. This blog
post shows you how to do so.
Moving forward, the process that this post describes should greatly
simplify automated task deployments and modification scenarios.
Following is a detailed description on how to automatically execute
the DMS task or schedule it for execution in Linux and Windows.
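On Linux, the approach the post describes boils down to a crontab entry invoking the AWS CLI. A sketch, with the task ARN left as a placeholder and assuming CLI credentials are configured on the host (reload-target restarts a full load; use resume-processing or start-replication depending on the task state):

```shell
# m h dom mon dow  command -- start the DMS task at 01:00 every Sunday (ARN is a placeholder)
0 1 * * 0  aws dms start-replication-task --replication-task-arn <your-replication-task-arn> --start-replication-task-type reload-target
```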