Get the latest job definition revision when submitting an AWS Batch job, without specifying the exact revision number

I am using the AWS Batch Java client com.amazonaws.services.batch (AWS SDK for Java 1.11.483) to submit jobs programmatically.
However, our scientists keep updating the job definition.
Every time there is a new job definition, I have to update the environment variable with the revision number to pass it to the client.
AWS documentation states that
This value can be either a name:revision or the Amazon Resource Name (ARN) for the job definition.
Is there any way I can default it to the latest revision, so that every time I submit a batch job the latest revision gets picked up without my even knowing the latest revision number?

This value can be either a name:revision or the Amazon Resource Name (ARN) for the job definition.
Seems like AWS didn't document this properly: the revision is optional, so you can simply use name instead of name:revision and it will use the ACTIVE revision of your job definition. The revision is also optional in job definition ARNs.
This also applies to boto3 and to the AWS Step Functions integration with AWS Batch, and probably to every other interface where a job definition name or ARN is required.
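With the Java client from the question, that means the revision can simply be left off; a minimal sketch (job name, queue, and definition name are placeholders):

import com.amazonaws.services.batch.AWSBatch;
import com.amazonaws.services.batch.AWSBatchClientBuilder;
import com.amazonaws.services.batch.model.SubmitJobRequest;
import com.amazonaws.services.batch.model.SubmitJobResult;

public class SubmitWithLatestRevision {
    public static void main(String[] args) {
        AWSBatch batch = AWSBatchClientBuilder.defaultClient();

        // Using the bare job definition name (no ":revision") makes AWS Batch
        // resolve the latest ACTIVE revision at submission time.
        SubmitJobRequest request = new SubmitJobRequest()
                .withJobName("my-job")
                .withJobQueue("my-job-queue")
                .withJobDefinition("my-job-definition"); // name only, no revision

        SubmitJobResult result = batch.submitJob(request);
        System.out.println("Submitted job " + result.getJobId());
    }
}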

From the AWS Batch SubmitJob API reference:

jobDefinition
The job definition used by this job. This value can be one of name, name:revision, or the Amazon Resource Name (ARN) for the job definition. If name is specified without a revision, then the latest active revision is used.

Perhaps the documentation has been updated by now.

I could not find any Java SDK function for this, so I ended up using a bash script that fetches the latest* revision number from AWS.
$ aws batch describe-job-definitions --job-definition-name ${full_name} \
--query='jobDefinitions[?status==`ACTIVE`].revision' --output=json \
--region=${region} | jq '.[0]'
(*) The .[0] picks the first object from the list of active revisions. I used this because, by default, AWS Batch returns the latest revision at the top. You can use .[-1] instead if you want the last one.
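That said, the same lookup should also be possible from the Java client mentioned in the question; a rough sketch (the job definition name is a placeholder, and pagination of the results is ignored for brevity):

import com.amazonaws.services.batch.AWSBatch;
import com.amazonaws.services.batch.AWSBatchClientBuilder;
import com.amazonaws.services.batch.model.DescribeJobDefinitionsRequest;
import com.amazonaws.services.batch.model.JobDefinition;

import java.util.Comparator;

public class LatestRevisionLookup {
    public static void main(String[] args) {
        AWSBatch batch = AWSBatchClientBuilder.defaultClient();

        // Ask only for ACTIVE revisions of the named job definition.
        DescribeJobDefinitionsRequest request = new DescribeJobDefinitionsRequest()
                .withJobDefinitionName("my-job-definition") // placeholder
                .withStatus("ACTIVE");

        // Pick the highest revision number rather than relying on result order.
        int latest = batch.describeJobDefinitions(request).getJobDefinitions().stream()
                .map(JobDefinition::getRevision)
                .max(Comparator.naturalOrder())
                .orElseThrow(() -> new IllegalStateException("no active revision"));

        System.out.println("Latest active revision: " + latest);
    }
}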

Related

AWS CDK - Exclude stage name from logical ID of resource

I have a CDK project that was initially deployed via the CLI. I am now wrapping it in a pipelines construct.
Old:
Project
|
Stacks
|
Resources
New:
Project
|
Pipeline
|
Stage
|
Stacks
|
Resources
The issue I'm running into is that there are resources in the application I would rather not have deleted, but adding the stage causes the logical IDs to change from Stack-Resource to Stage-Stack-Resource. I found an article claiming that giving a resource an ID of 'Default' causes it to be skipped when the logical ID is computed; however, when I pass an ID of Default to the stage, it simply uses the literal value "Default" instead of omitting it.
The end goal is that I can keep my existing CloudFormation resources but have them deployed via this pipeline.
You can override the logical id manually like this:
S3 example:
const cfnBucket = s3Bucket.node.defaultChild as aws_s3.CfnBucket;
cfnBucket.overrideLogicalId('CUSTOMLOGICALID');
However, if you did not specify a logical ID initially and do it now, CloudFormation will delete the original resource and create a new one with the new custom logical ID, because CloudFormation identifies resources by their logical IDs.
Stage is something you define; it is not itself a CloudFormation concept. You are probably using it in your stack name or in your resource names, and that's why it ends up in the logical IDs.
Based on your project description, the only option for not having any resources deleted is to make one of the pipeline stages use the exact same stack name and resource names (without the stage) as the CLI-deployed version.
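If individual resources still get renamed after that, you can pin them with the override shown above, as long as the logical ID you set is the exact one from the already-deployed template and the stack name is unchanged. A rough sketch in CDK (Java, v2 API; the bucket and the "MyBucketOLDHASH" value are placeholders, copied from the deployed template in practice):

import software.constructs.Construct;
import software.amazon.awscdk.Stack;
import software.amazon.awscdk.services.s3.Bucket;
import software.amazon.awscdk.services.s3.CfnBucket;

public class AppStack extends Stack {
    public AppStack(final Construct scope, final String id) {
        super(scope, id);

        Bucket bucket = Bucket.Builder.create(this, "MyBucket").build();

        // Pin the logical ID to the value from the original CLI-deployed
        // template so CloudFormation keeps treating this as the same resource.
        // "MyBucketOLDHASH" is a placeholder for the real logical ID.
        CfnBucket cfnBucket = (CfnBucket) bucket.getNode().getDefaultChild();
        cfnBucket.overrideLogicalId("MyBucketOLDHASH");
    }
}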
I ended up doing a full redeploy of the application. Luckily this was a development environment, where trashing our data stores isn't a huge loss; it would be much more of a concern in a production environment.

AWS CodeBuild/CodePipeline user input parameters

I'm new to AWS and transitioning from Azure. I would like to create a pipeline in CodePipeline that asks the user for input (for example: the user needs to input a value for the variable "hello") and uses that input to run a CodeBuild project. In Azure DevOps this was quite easy to define in the pipeline YAML specification, but I can't seem to find a way to do this easily in AWS. Am I missing something?
AWS CodePipeline does not support this feature currently. What you can do is pass the parameter in your commit message (if the pipeline triggers on commits to branches) or in your Git tag (if the pipeline triggers on a tag push).
example:
commit message: my commit message [my_var]
git tag: my_var-1.0.0
Then, in your buildspec.yml file, read the commit message or tag and check whether it contains the required parameter. If it does, execute the next commands; otherwise exit the script. A sketch of such a buildspec follows.
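For example, a minimal buildspec.yml sketch for the commit-message variant (my_var is the placeholder parameter from the example above; this assumes the build has access to the source's git metadata, which is not the case for every CodePipeline source configuration):

version: 0.2

phases:
  build:
    commands:
      # Read the message of the commit that triggered the build.
      - COMMIT_MSG=$(git log -1 --pretty=%B)
      # Continue only if the required parameter is present.
      - |
        if echo "$COMMIT_MSG" | grep -q "\[my_var\]"; then
          echo "Parameter found, continuing build"
        else
          echo "Required parameter missing, stopping" && exit 1
        fi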

AWS: Delete lambda layer still retains layer version history

I am deploying an AWS Lambda layer using the aws cli:
aws lambda publish-layer-version --layer-name my_layer --zip-file fileb://my_layer.zip
I delete it using
VERSION=$(aws lambda list-layer-versions --layer-name my_layer | jq '.LayerVersions[0].Version')
aws lambda delete-layer-version --layer-name my_layer --version-number $VERSION
It deletes successfully, and I've ensured that no other version of the layer exists.
aws lambda list-layer-versions --layer-name my_layer
>
{
"LayerVersions": []
}
Upon the next publish of the layer, it still retains the history of previous versions. From what I've read, if no layer version exists and no reference exists, the version history should be gone, but I don't see that. Does anybody have a solution to HARD delete the layer along with its version history?
I have the same problem. What I'm trying to achieve is to "reset" the version count to 1 in order to match the code versions and tags on my repo. Currently, the only way I've found is to publish a new layer under a new name.
I think the AWS Lambda product lacks features that would help with (semantic) versioning.
Currently there's no way to do this. Layer versions are immutable; they cannot be updated or modified, and you can only delete and publish new layer versions. Once a layer version is 'stamped', there is no way (AFAIK) to get that layer version back.
It might be that after a while (weeks? months?) AWS deletes its memory of the layer version, but as it stands, the version number assigned to any deleted layer version cannot be assumed by any new one.
I ran into a similar problem with layer versions. Do you have any suggestion for a simple way out, instead of writing code to check the available versions out there and pick the latest one?
I am facing the same issue. As there was no aws cli command to delete the layer itself, I had to delete all versions of my Lambda layer using:
aws lambda delete-layer-version --layer-name test_layer --version-number 1
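Deleting every version in one pass can also be scripted; a rough sketch with the AWS SDK for Java v1 (the layer name is a placeholder, and result pagination is ignored for brevity):

import com.amazonaws.services.lambda.AWSLambda;
import com.amazonaws.services.lambda.AWSLambdaClientBuilder;
import com.amazonaws.services.lambda.model.DeleteLayerVersionRequest;
import com.amazonaws.services.lambda.model.LayerVersionsListItem;
import com.amazonaws.services.lambda.model.ListLayerVersionsRequest;

public class DeleteAllLayerVersions {
    public static void main(String[] args) {
        AWSLambda lambda = AWSLambdaClientBuilder.defaultClient();
        String layerName = "test_layer"; // placeholder

        // List the layer's versions and delete each one.
        for (LayerVersionsListItem version : lambda
                .listLayerVersions(new ListLayerVersionsRequest().withLayerName(layerName))
                .getLayerVersions()) {
            lambda.deleteLayerVersion(new DeleteLayerVersionRequest()
                    .withLayerName(layerName)
                    .withVersionNumber(version.getVersion()));
        }
    }
}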
After deleting all versions of the layer, the layer was no longer showing on the AWS Lambda layers page, so I thought it was successfully deleted.
But to my surprise, AWS still keeps data about deleted layers (at least the last version, for sure): when you create a Lambda layer with a previously deleted layer's name using the AWS GUI or CLI, the version does not start from 1; instead, it continues from the last version of the supposedly deleted layer.

Downgrade to previous version of AWS Lambda

Working with AWS Lambda functions, I use the versioning feature provided by AWS Lambda. Each time I deploy a new version of my artifact to AWS, I create a new version of the function and publish it (using the popup from the screenshot).
But how can I publish a previous version of my function (for example, when I need to roll back my last publication)?
You should provide each new version with an alias.
From the AWS Documentation
In contrast, instead of specifying the function ARN, suppose that you specify an alias ARN in the notification configuration (for example, PROD alias ARN). As you promote new versions of your Lambda function into production, you only need to update the PROD alias to point to the latest stable version. You don't need to update the notification configuration in Amazon S3.
The same applies when you need to roll back to a previous version of your Lambda function. In this scenario, you just update the PROD alias to point to a different function version. There is no need to update event source mappings.
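Repointing the alias is a single call; for example, with the AWS SDK for Java v1 (function name, alias name, and target version are placeholders):

import com.amazonaws.services.lambda.AWSLambda;
import com.amazonaws.services.lambda.AWSLambdaClientBuilder;
import com.amazonaws.services.lambda.model.UpdateAliasRequest;

public class RollbackAlias {
    public static void main(String[] args) {
        AWSLambda lambda = AWSLambdaClientBuilder.defaultClient();

        // Point the PROD alias at an earlier, known-good version.
        lambda.updateAlias(new UpdateAliasRequest()
                .withFunctionName("my_lambda_function")
                .withName("PROD")
                .withFunctionVersion("17"));
    }
}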
One solution I've found that works if you're in a pinch is to go to a previous (working) version of the lambda, download the deployment package, and redeploy the downloaded zip package using the aws cli. I'm sure there is a more elegant solution, but if you're in a pinch and need something right now, this works.
$ aws lambda update-function-code \
--function-name my_lambda_function \
--zip-file fileb://function.zip
In order to roll back to a specific version, you need to point the alias assigned to the current version at the version you want to roll back to.
For example: my latest version is 20 and has the alias 'Active'. For me to roll back or remove version 20, I need to remove the alias or reassign it to another version. So if I point my alias at version 17, Lambda will take version 17 as the default or prod version.
You can update the alias here:
https://myRegion.console.aws.amazon.com/lambda/home?region=myRegion#/functions/functionName/aliases/Active?tab=graph
(Replace myRegion and functionName with the relevant values.)
On that page, go to the 'Aliases' section and click the 'Version' dropdown (by default it displays the version the alias is currently assigned to). Select the version you want the alias to point to and click Save.
That's all!
There is no such feature in Lambda functions.

Can I have terraform keep the old versions of objects?

New to Terraform, so perhaps it's just not supposed to work this way. I want to use aws_s3_bucket_object to upload a package to a bucket as part of an app deploy. I'm going to be changing the package for each deploy, and I want to keep the old versions.
resource "aws_s3_bucket_object" "object" {
bucket = "mybucket-app-versions"
key = "version01.zip"
source = "version01.zip"
}
But for a future deploy I will want to upload version02, then version03, and so on. Terraform replaces the old zip with the new one, which is the expected behavior.
But is there a way to have Terraform not destroy the old version? Is this a supported use case, or is this not how I'm supposed to use Terraform? I wouldn't want to force this with an ugly hack if Terraform doesn't officially support something like what I'm trying to do here.
I could of course just call the S3 API via a script, but it would be great to have this defined with the rest of the Terraform definition for this app.
When using Terraform for application deployment, the recommended approach is to separate the build step from the deploy step and use Terraform only for the latter.
The responsibility of the build step -- which is implemented using a separate tool, depending on the method of deployment -- is to produce some artifact (an archive, a docker container, a virtual machine image, etc), publish it somewhere, and then pass its location or identifier to Terraform for deployment.
This separation between build and deploy allows for more complex situations, such as rolling back to an older artifact (without rebuilding it) if the new version has problems.
In simple scenarios it is possible to pass the artifact location to Terraform using Input Variables. For example, in your situation where the build process would write a zip file to S3, you might define a variable like this:
variable "archive_name" {
}
This can then be passed to whatever resource needs it using ${var.archive_name} interpolation syntax. To deploy a particular artifact, pass its name on the command line using -var:
$ terraform apply -var="archive_name=version01.zip"
Some organizations prefer to keep a record of the "current" version of each application in some kind of data store, such as HashiCorp Consul, and read it using a data source. This approach can be easier to orchestrate in an automated build pipeline, since it allows this separate data store to be used to indirectly pass the archive name between the build and deploy steps, without needing to pass any unusual arguments to Terraform itself.
Currently, you tell Terraform to manage one aws_s3_bucket_object, and Terraform takes care of its whole life cycle, meaning Terraform will also replace the file if it sees any changes to it.
What you are maybe looking for is the null_resource. You can use it to run a local-exec provisioner that uploads the file with a script. That way the old file won't be deleted, as it is not directly managed by Terraform. You'd still be calling the API via a script, but the whole process of uploading to S3 would still be included in your terraform apply step.
Here is an outline of the null_resource:
resource "null_resource" "upload_to_s3" {
depends_on = ["<any resource that should already be created before upload>"]
...
triggers = ["<A resource change that must have happened so terraform starts the upload>"]
provisioner "local-exec" {
command = "<command to upload local package to s3>"
}
}