New version of task definition in ECS Fargate

I have an ECS cluster running. I want to create a new version of a task definition using the AWS CLI.
I know I need to use the command below to create the new version.
aws ecs register-task-definition --family API-servie-fetch --cli-input-json file://TD-DC.json
But I am not sure where to get this JSON file ("file://TD-DC.json") from.
I believe I have to update the image tag and version number in this file, but where can I get the file itself?
Note: My task is already running and I only want to update it with a new image; all other parameters should stay the same.

You can obtain the current task definition in JSON format using describe-task-definition. Once you have it, you can modify it as you want and then upload it as a new revision.
If you work on the command line, you can use jq to modify/process the original task definition JSON.
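If you prefer to script this with an SDK rather than the CLI, here is a minimal boto3 sketch of the same flow (the image URI is a placeholder; the family name is taken from the question). Note that describe_task_definition returns a few read-only fields that register_task_definition will not accept, so they are stripped first:
import boto3

ecs = boto3.client("ecs")

# Fetch the latest ACTIVE revision of the task definition family
task_def = ecs.describe_task_definition(taskDefinition="API-servie-fetch")["taskDefinition"]

# Drop the read-only fields that describe returns but register rejects
for field in ("taskDefinitionArn", "revision", "status", "requiresAttributes",
              "compatibilities", "registeredAt", "registeredBy"):
    task_def.pop(field, None)

# Point the container at the new image tag (placeholder URI), leave everything else unchanged
task_def["containerDefinitions"][0]["image"] = "<account>.dkr.ecr.<region>.amazonaws.com/api-service:<new-tag>"

# Register the result as a new revision of the same family
new_def = ecs.register_task_definition(**task_def)
print(new_def["taskDefinition"]["revision"])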


What is the aws-cli command for AWS Macie to create a job?

Actually I want to create a job in AWS Macie using the AWS CLI.
I ran the following command:
aws macie2 create-classification-job --job-type "ONE_TIME" --name "maice-poc" --s3-job-definition bucketDefinitions=[{"accountID"="254378651398", "buckets"=["maice-poc"]}]
but it is giving me an error:
Unknown options: buckets=[maice-poc]}]
Can someone give me the correct command?
The s3-job-definition parameter requires a structure as its value.
In your case you want to pass a JSON-formatted structure, so wrap the JSON starting with bucketDefinitions in single quotes. Also, use the JSON syntax : for key-value pairs instead of =.
The following API call should work:
aws macie2 create-classification-job --job-type "ONE_TIME" --name "macie-poc" --s3-job-definition '{"bucketDefinitions":[{"accountId":"254378651398", "buckets":["maice-poc"]}]}'
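If you ever script this instead of calling the CLI, the same structure maps directly onto a plain dict in boto3 (a rough sketch; the account ID and names are taken from the question):
import boto3

macie = boto3.client("macie2")

# The s3JobDefinition structure is the dict equivalent of the JSON passed to the CLI
response = macie.create_classification_job(
    jobType="ONE_TIME",
    name="macie-poc",
    s3JobDefinition={
        "bucketDefinitions": [
            {"accountId": "254378651398", "buckets": ["maice-poc"]}
        ]
    },
)
print(response["jobArn"])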

AWS CDK create Lambda from image

I am new to the AWS world. I am working on a project to build a serverless application, and as part of that I have created 4 Lambdas which work fine.
Next I am trying to create a deployment pipeline using CDK; below is what I am trying to do.
create a Docker image that includes all the Lambda code
create 4 different Lambdas from the same image, just overriding the CMD in the Docker image to point at each Lambda handler
I have set up CDK locally and am able to create the stack; everything works fine.
Below is my code snippet
# create the docker image
asset_img = aws_ecr_assets.DockerImageAsset(
    self,
    "test_image",
    directory=os.path.join(os.getcwd(), "../mysrc")
)

# create lambda from docker image
aws_lambda.DockerImageFunction(
    self,
    function_name="gt_from_image",
    code=_lambda.DockerImageCode.from_ecr(
        self,
        repository=asset_img.repository,
        tag="latest")
)
Below is the error I am getting
TypeError: from_ecr() got multiple values for argument 'repository'
I am not sure how I can reference the image that was created and define the lambda.
Solved: Below is the solution
asset_img = _asset.DockerImageAsset(
    self, "test_image", directory=os.path.join(os.getcwd(), "../gt")
)
_lambda.DockerImageFunction(
    self, id='gt_from_image', function_name="gt_from_image_Fn",
    code=_lambda.DockerImageCode.from_ecr(
        repository=asset_img.repository,
        tag=asset_img.source_hash
    )
)
Per the documentation for DockerImageCode.from_ecr(), it does not expect a scope argument, so the self argument you pass is what is causing the error.
Another issue is that DockerImageAsset will not tag the image as latest, as that is against AWS best practices.
The easy way to achieve what you are doing is to use DockerImageCode.from_image_asset().
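For reference, a rough sketch of the from_image_asset() route inside the same stack (the cmd handler string is a hypothetical placeholder; the directory is the one from the question). It builds the image from the local Dockerfile, pushes it as a CDK asset, and wires the function to it, so no separate DockerImageAsset, repository or tag handling is needed:
import os
from aws_cdk import aws_lambda as _lambda

_lambda.DockerImageFunction(
    self, "gt_from_image",
    function_name="gt_from_image_Fn",
    code=_lambda.DockerImageCode.from_image_asset(
        directory=os.path.join(os.getcwd(), "../mysrc"),
        # override the image CMD per function to point at the right handler
        cmd=["app.lambda_handler"],
    ),
)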

Get latest job revision while submitting AWS batch job without specifying the exact revision number

I am using the AWS Batch Java client com.amazonaws.services.batch (AWS SDK for Java 1.11.483) to submit jobs programmatically.
However, our scientists keep updating the job definition.
Every time there is a new job definition, I have to update the environment variable with the revision number to pass it to the client.
AWS documentation states that
This value can be either a name:revision or the Amazon Resource Name (ARN) for the job definition.
Is there any way I can default to the latest revision, so that every time I submit a batch job the latest revision gets picked up without my having to know the revision number?
This value can be either a name:revision or the Amazon Resource Name (ARN) for the job definition.
It seems AWS didn't document this properly: revision is optional, so you can simply use name instead of name:revision and it will use the ACTIVE revision of your job definition. The revision is also optional for job definition ARNs.
This also applies to boto3, to the AWS Step Functions integration with AWS Batch, and probably to every other interface where a job definition name or ARN is required.
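For example, a minimal boto3 sketch (the job, queue, and definition names here are placeholders): because jobDefinition is given as a bare name with no :revision suffix, AWS Batch resolves it to the latest ACTIVE revision at submission time.
import boto3

batch = boto3.client("batch")

response = batch.submit_job(
    jobName="my-job",                  # placeholder
    jobQueue="my-job-queue",           # placeholder
    jobDefinition="my-job-definition", # name only: latest ACTIVE revision is used
)
print(response["jobId"])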
From the AWS Batch SubmitJob API reference:
jobDefinition
The job definition used by this job. This value can be one of name,
name:revision, or the Amazon Resource Name (ARN) for the job
definition. If name is specified without a revision then the latest
active revision is used.
Perhaps the documentation has been updated by now.
I could not find any Java SDK function but I ended up using a bash script that fetches the latest* revision number from AWS.
$ aws batch describe-job-definitions --job-definition-name ${full_name} \
--query='jobDefinitions[?status==`ACTIVE`].revision' --output=json \
--region=${region} | jq '.[0]'
(*) The .[0] picks the first object from the list of active revisions; I used this because, by default, AWS Batch returns the latest revision at the top. You can use .[-1] if you want the last one.

DataFlow gcloud CLI - "Template metadata was too large"

I've honed my transformations in DataPrep, and am now trying to run the DataFlow job directly using gcloud CLI.
I've exported my template and template metadata file, and am trying to run them using gcloud dataflow jobs run and passing in the input & output locations as parameters.
I'm getting the error:
Template metadata regex '[ \t\n\x0B\f\r]*\{[ \t\n\x0B\f\r]*((.|\r|\n)*".*"[ \t\n\x0B\f\r]*:[ \t\n\x0B\f\r]*".*"(.|\r|\n)*){17}[ \t\n\x0B\f\r]*\}[ \t\n\x0B\f\r]*' was too large. Max size is 1000 but was 1187.
I've not specified this at the command line, so I know it's getting it from the metadata file - which is straight from DataPrep, unedited by me.
I have 17 input locations - one containing source data, all the others are lookups. There is a regex for each one, plus one extra.
If it runs when triggered from DataPrep but won't run via the CLI, am I missing something?
I'd suspect the root cause is a limitation in gcloud that is not present in the Dataflow API or Dataprep. The best thing to do in this case is to open a new Cloud Dataflow issue in the public tracker and provide details there.

Can I have terraform keep the old versions of objects?

New to Terraform, so perhaps it's just not supposed to work this way. I want to use aws_s3_bucket_object to upload a package to a bucket - this is part of an app deploy. I'm going to be changing the package for each deploy, and I want to keep the old versions.
resource "aws_s3_bucket_object" "object" {
bucket = "mybucket-app-versions"
key = "version01.zip"
source = "version01.zip"
}
But for a future deploy I will want to upload version02, then version03, and so on. Terraform replaces the old zip with the new one - expected behavior.
But is there a way to have Terraform not destroy the old version? Is this a supported use case, or is this not how I'm supposed to use Terraform? I wouldn't want to force this with an ugly hack if Terraform doesn't have official support for doing something like what I'm trying to do here.
I could of course just call the S3 API via a script, but it would be great to have this defined with the rest of the Terraform definition for this app.
When using Terraform for application deployment, the recommended approach is to separate the build step from the deploy step and use Terraform only for the latter.
The responsibility of the build step -- which is implemented using a separate tool, depending on the method of deployment -- is to produce some artifact (an archive, a docker container, a virtual machine image, etc), publish it somewhere, and then pass its location or identifier to Terraform for deployment.
This separation between build and deploy allows for more complex situations, such as rolling back to an older artifact (without rebuilding it) if the new version has problems.
In simple scenarios it is possible to pass the artifact location to Terraform using Input Variables. For example, in your situation where the build process would write a zip file to S3, you might define a variable like this:
variable "archive_name" {
}
This can then be passed to whatever resource needs it using ${var.archive_name} interpolation syntax. To deploy a particular artifact, pass its name on the command line using -var:
$ terraform apply -var="archive_name=version01.zip"
Some organizations prefer to keep a record of the "current" version of each application in some kind of data store, such as HashiCorp Consul, and read it using a data source. This approach can be easier to orchestrate in an automated build pipeline, since it allows this separate data store to be used to indirectly pass the archive name between the build and deploy steps, without needing to pass any unusual arguments to Terraform itself.
Currently you tell Terraform to manage one aws_s3_bucket_object, and Terraform takes care of its whole lifecycle, meaning it will also replace the file if it sees any changes to it.
What you may be looking for is the null_resource. You can use it to run a local-exec provisioner that uploads the file with a script. That way the old file won't be deleted, as it is not directly managed by Terraform. You'd still be calling the API via a script, but the whole process of uploading to S3 would still be included in your terraform apply step.
Here is an outline of the null_resource:
resource "null_resource" "upload_to_s3" {
depends_on = ["<any resource that should already be created before upload>"]
...
triggers = ["<A resource change that must have happened so terraform starts the upload>"]
provisioner "local-exec" {
command = "<command to upload local package to s3>"
}
}