AWS ssm send-command: modify timeout in CLI

I'm using AWS SSM to run a long script on an EC2 instance.
I would like to configure the execution timeout (execution time, not launch time), but I can't find how to do this in the official documentation (the information there is either conflicting or doesn't work).
I'm using only the CLI.

This value is a document parameter that can be passed with the --parameters option using the executionTimeout key. You can use aws ssm describe-document to find this and other document-specific parameters (an example send-command invocation follows the output below).
aws ssm describe-document --name "AWS-RunShellScript"
{
"Document": {
"Hash": "99749de5e62f71e5ebe9a55c2321e2c394796afe7208cff048696541e6f6771e",
"HashType": "Sha256",
"Name": "AWS-RunShellScript",
"Owner": "Amazon",
"CreatedDate": "2017-08-21T22:25:02.029000+02:00",
"Status": "Active",
"DocumentVersion": "1",
"Description": "Run a shell script or specify the commands to run.",
"Parameters": [
{
"Name": "commands",
"Type": "StringList",
"Description": "(Required) Specify a shell script or a command to run."
},
{
"Name": "workingDirectory",
"Type": "String",
"Description": "(Optional) The path to the working directory on your instance.",
"DefaultValue": ""
},
{
"Name": "executionTimeout",
"Type": "String",
"Description": "(Optional) The time in seconds for a command to complete before it is considered to have failed. Default is 3600 (1 hour). Maximum is 172800 (48 hours).",
"DefaultValue": "3600"
}
],
"PlatformTypes": [
"Linux",
"MacOS"
],
"DocumentType": "Command",
"SchemaVersion": "1.2",
"LatestVersion": "1",
"DefaultVersion": "1",
"DocumentFormat": "JSON",
"Tags": []
}
}
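For example, to allow the script up to two hours of execution time (the instance ID and script path are placeholders):
aws ssm send-command \
    --document-name "AWS-RunShellScript" \
    --instance-ids "i-xxxxxxxx" \
    --parameters '{"commands":["/home/ec2-user/long-script.sh"],"executionTimeout":["7200"]}'
Note that send-command's separate --timeout-seconds option only controls how long Systems Manager waits for the command to start (the launch timeout), not how long it may run.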

Related

Where/how do I define a NotificationConfig in an AWS SSM Automation document?

Say I have an SSM document like the below, and I want to be alerted when a run fails or doesn't finish for whatever reason:
{
"description": "Restores specified pg_dump backup to specified RDS/DB.",
"mainSteps": [
{
"action": "aws:runCommand",
"description": "Restores specified pg_dump backup to specified RDS/DB.",
"inputs": {
"DocumentName": "AWS-RunShellScript",
"Parameters": {
"commands": [
"blahblahblah"
],
"executionTimeout": "1800"
},
"Targets": [
{
"Key": "InstanceIds",
"Values": [
"i-xxxxxxxx"
]
}
]
},
"name": "DBRestorer",
"nextStep": "RunQueries"
},
The Terraform documentation suggests that RunCommand steps should support a NotificationConfig where I can pass in my SNS topic ARN and declare which state transitions should trigger a message: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_maintenance_window_task#notification_config
However, I can't find any Amazon docs that actually show a notification configuration being used in the document itself (as opposed to the maintenance window; mine is set up as an Automation task, so it doesn't support this at the window level), so I'm not sure whether it belongs as a sub-parameter, or whether to write it in camel case or with dash separation.
Try this
{
"description": "Restores specified pg_dump backup to specified RDS/DB.",
"mainSteps": [
{
"action": "aws:runCommand",
"description": "Restores specified pg_dump backup to specified RDS/DB.",
"inputs": {
"DocumentName": "AWS-RunShellScript",
"NotificationConfig": {
"NotificationArn": "<<Replace this with a SNS Topic Arn>>",
"NotificationEvents": ["All"],
"NotificationType": "Invocation"
},
"ServiceRoleArn": "<<Replace this with an IAM role Arn that has access to SNS>>",
"Parameters": {
"commands": [
"blahblahblah"
],
"executionTimeout": "1800"
},
"Targets": [
{
"Key": "InstanceIds",
"Values": [
"i-xxxxxxxx"
]
}
]
},
"name": "DBRestorer",
"nextStep": "RunQueries"
},
...
]
}
Related documentation:
https://docs.aws.amazon.com/systems-manager/latest/userguide/automation-action-runcommand.html
https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_NotificationConfig.html#systemsmanager-Type-NotificationConfig-NotificationType
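The same settings can also be passed when invoking Run Command directly from the CLI, via --notification-config and --service-role-arn (the ARNs and instance ID below are placeholders):
aws ssm send-command \
    --document-name "AWS-RunShellScript" \
    --instance-ids "i-xxxxxxxx" \
    --parameters '{"commands":["blahblahblah"],"executionTimeout":["1800"]}' \
    --service-role-arn "arn:aws:iam::123456789012:role/sns-publish-role" \
    --notification-config "NotificationArn=arn:aws:sns:us-east-1:123456789012:my-topic,NotificationEvents=All,NotificationType=Invocation"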

GetParameter VS GetParameters

What is the difference between AWS SSM GetParameter and GetParameters?
I have a machine with an IAM policy that allows GetParameters, and I try to read a parameter with Terraform using the following code:
data "aws_ssm_parameter" "variable" { name = "variable"}
I get an error indicating I'm not authorized to perform GetParameter.
As the names suggest:
GetParameter provides details about only one parameter per API call.
GetParameters provides details about multiple parameters in one API call.
The parameter details returned are exactly the same for both calls, as both return a Parameter object:
"Parameter": {
"ARN": "string",
"DataType": "string",
"LastModifiedDate": number,
"Name": "string",
"Selector": "string",
"SourceResult": "string",
"Type": "string",
"Value": "string",
"Version": number
}
The key benefit of GetParameters is that you can fetch many parameters in a single API call, which saves time.
Example use of GetParameter:
aws ssm get-parameter --name /db/password
{
"Parameter": {
"Name": "/db/password",
"Type": "String",
"Value": "secret password",
"Version": 1,
"LastModifiedDate": 1589285865.183,
"ARN": "arn:aws:ssm:us-east-1:xxxxxxxxx:parameter/db/password",
"DataType": "text"
}
}
Example use of GetParameters with two parameters:
aws ssm get-parameters --names /db/password /db/url
{
"Parameters": [
{
"Name": "/db/password",
"Type": "String",
"Value": "secret password",
"Version": 1,
"LastModifiedDate": 1589285865.183,
"ARN": "arn:aws:ssm:us-east-1:xxxxxxxxx:parameter/db/password",
"DataType": "text"
},
{
"Name": "/db/url",
"Type": "String",
"Value": "url to db",
"Version": 1,
"LastModifiedDate": 1589285879.912,
"ARN": "arn:aws:ssm:us-east-1:xxxxxxxxx:parameter/db/url",
"DataType": "text"
}
],
"InvalidParameters": []
}
Example use of GetParameters with a non-existing second parameter (/db/wrong):
aws ssm get-parameters --names /db/password /db/wrong
{
"Parameters": [
{
"Name": "/db/password",
"Type": "String",
"Value": "secret password",
"Version": 1,
"LastModifiedDate": 1589285865.183,
"ARN": "arn:aws:ssm:us-east-1:xxxxxxxxx:parameter/db/password",
"DataType": "text"
}
],
"InvalidParameters": [
"/db/wrong"
]
}
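As for the Terraform error in the question: the aws_ssm_parameter data source calls GetParameter, so the machine's policy needs ssm:GetParameter in addition to ssm:GetParameters. A minimal policy sketch (region and account ID are placeholders):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:GetParameter",
                "ssm:GetParameters"
            ],
            "Resource": "arn:aws:ssm:us-east-1:123456789012:parameter/variable"
        }
    ]
}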

AWS CodePipeline "An AppSpec file is required, but could not be found in the revision"

I'm trying to set up a deployment pipeline using CodeCommit, ECR and ECS. My pipeline passes the source and build steps fine. I can deploy manually via CodeDeploy if I upload my appspec.yaml file to an s3 bucket. Deploys triggered by a change to my CodeCommit repository always fail with the error:
An AppSpec file is required, but could not be found in the revision
When I look at the details of the failed deployment, I can pull up the revision location.
I see in the CodeDeploy troubleshooting section that some editors can cause issues. I'm using VS Code on Linux, so I don't think that should be an issue. Also, if I upload the same appspec file to S3 and reference it from a manual deployment, it works fine.
I've also tried uploading the same file, but named appspec.yml. Still failed.
The role that this deployment uses has full S3 access; I'm not sure if it could be some other permissions-related problem.
Here is my CodePipeline definition:
{
"pipeline": {
"roleArn": "arn:aws:iam::690517313378:role/service-role/AWSCodePipelineServiceRole-us-east-1-blottermappertf",
"stages": [
{
"name": "Source",
"actions": [
{
"inputArtifacts": [],
"name": "Source",
"region": "us-east-1",
"actionTypeId": {
"category": "Source",
"owner": "AWS",
"version": "1",
"provider": "CodeCommit"
},
"outputArtifacts": [
{
"name": "SourceArtifact"
}
],
"configuration": {
"PollForSourceChanges": "false",
"BranchName": "master",
"RepositoryName": "blottermapper"
},
"runOrder": 1
}
]
},
{
"name": "Build",
"actions": [
{
"inputArtifacts": [
{
"name": "SourceArtifact"
}
],
"name": "Build",
"region": "us-east-1",
"actionTypeId": {
"category": "Build",
"owner": "AWS",
"version": "1",
"provider": "CodeBuild"
},
"outputArtifacts": [
{
"name": "BuildArtifact"
}
],
"configuration": {
"ProjectName": "blottermapper",
"EnvironmentVariables": "[{\"name\":\"REPOSITORY_URI\",\"value\":\"690517313378.dkr.ecr.us-east-1.amazonaws.com/net.threeninetyfive\",\"type\":\"PLAINTEXT\"}]"
},
"runOrder": 1
}
]
},
{
"name": "Deploy",
"actions": [
{
"inputArtifacts": [
{
"name": "BuildArtifact"
}
],
"name": "Deploy",
"region": "us-east-1",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"version": "1",
"provider": "CodeDeploy"
},
"outputArtifacts": [],
"configuration": {
"ApplicationName": "blottermappertf",
"DeploymentGroupName": "blottermappertf"
},
"runOrder": 1
}
]
}
],
"artifactStore": {
"type": "S3",
"location": "codepipeline-us-east-1-634554346591"
},
"name": "blottermappertf",
"version": 1
},
"metadata": {
"pipelineArn": "arn:aws:codepipeline:us-east-1:690517313378:blottermappertf",
"updated": 1573712712.49,
"created": 1573712712.49
}
}
"An AppSpec file is required, but could not be found in the revision"
The above error is caused by the wrong configuration in your pipeline. To perform ECS deployments through CodeDeploy, the provider of the deploy action in your pipeline must be "ECS (Blue/Green)", not "CodeDeploy" (the plain CodeDeploy provider is used for EC2 deployments).
Even though it uses CodeDeploy in the back end, the name of the provider is "ECS (Blue/Green)".
I found the answer here:
The deployment specifies that the revision is a null file, but the revision provided is a zip file
I was using the wrong action provider when setting up my deployment: I chose ECS when I should have chosen ECS (Blue/Green).
The ambiguous error message made debugging and searching for answers on Stack Overflow difficult for me.
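For reference, here is a rough sketch of what the Deploy action from the pipeline above might look like with the ECS (Blue/Green) action type. In the JSON pipeline definition that provider is named CodeDeployToECS, and the taskdef.json/appspec.yaml paths below are assumptions; check the configuration keys against the CodePipeline action reference for your setup:
{
    "name": "Deploy",
    "region": "us-east-1",
    "actionTypeId": {
        "category": "Deploy",
        "owner": "AWS",
        "version": "1",
        "provider": "CodeDeployToECS"
    },
    "inputArtifacts": [
        {
            "name": "BuildArtifact"
        }
    ],
    "configuration": {
        "ApplicationName": "blottermappertf",
        "DeploymentGroupName": "blottermappertf",
        "TaskDefinitionTemplateArtifact": "BuildArtifact",
        "TaskDefinitionTemplatePath": "taskdef.json",
        "AppSpecTemplateArtifact": "BuildArtifact",
        "AppSpecTemplatePath": "appspec.yaml"
    },
    "runOrder": 1
}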

AWS CodePipeline: Execute deploy action in a different region than the one the pipeline is triggered in

I'm setting up a pipeline to automate the deployment of CloudFormation stack templates.
The pipeline itself is created in the eu-west-1 region, but the CloudFormation stack templates could be deployed to any other region.
I already know how to execute a pipeline action in a different account, but I don't see where to specify the region I would like my template to be deployed in, like we do with the AWS CLI: aws cloudformation deploy --region ...
Is there any way to trigger a pipeline in one region and execute a deploy action in another region?
The action configuration properties don't seem to offer such a possibility...
A workaround would be to run the aws cloudformation deploy command from the CLI in the CodeBuild container and specify the desired region, but I would like to know if there is a more elegant way to do it.
If you're looking to deploy to multiple regions, one after the other, you could create a CodePipeline pipeline in every region you want to deploy to, and set up S3 cross-region replication so that the output of the first pipeline becomes the input to a pipeline in the next region.
Here's a blog post explaining this further: https://aws.amazon.com/blogs/devops/building-a-cross-regioncross-account-code-deployment-solution-on-aws/
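A rough sketch of wiring up that replication with the CLI (bucket names and role are placeholders; both buckets need versioning enabled for replication to work):
# Versioning must be enabled on both the source and destination artifact buckets
aws s3api put-bucket-versioning --bucket codepipeline-eu-west-1-source \
    --versioning-configuration Status=Enabled
aws s3api put-bucket-versioning --bucket codepipeline-us-east-1-dest \
    --versioning-configuration Status=Enabled

# Replicate the source pipeline's output artifacts to the next region's bucket
aws s3api put-bucket-replication --bucket codepipeline-eu-west-1-source \
    --replication-configuration '{
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [{
            "Status": "Enabled",
            "Prefix": "",
            "Destination": { "Bucket": "arn:aws:s3:::codepipeline-us-east-1-dest" }
        }]
    }'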
Since late November 2018, CodePipeline supports cross-region deploys. However, it still leaves a lot to be desired, as you need to create artifact buckets in each region and copy the deployment artifacts over to them (e.g. in the CodeBuild container, as you mentioned) before the Deploy action is triggered. So it's not as automated as it could be, but if you go through the process of setting it up, it works well.
CodePipeline now supports cross-region deployment. To run the deployment in a different region, specify the "Region" property (for example "Region": "us-west-2") on the CloudFormation action in the stage; this will trigger the deployment in that specific region.
Steps to follow for this setup:
Create two buckets in two different regions, for example one bucket in "us-east-1" and one in "us-west-2" (you can also reuse the buckets already created by CodePipeline the first time you set up a pipeline in each region).
Configure the pipeline so that it uses the respective artifact bucket for actions that run in the respective region.
Specify the region on the action in the pipeline.
Note: below is a sample CloudFormation template that will help you do the cross-region CloudFormation deployment.
{
"Parameters": {
"BranchName": {
"Description": "CodeCommit branch name for all the resources",
"Type": "String",
"Default": "master"
},
"RepositoryName": {
"Description": "CodeComit repository name",
"Type": "String",
"Default": "aws-account-resources"
},
"CFNServiceRoleDeployA": {
"Description": "CFN service role for create resourcecs for account-A",
"Type": "String",
"Default": "arn:aws:iam::xxxxxxxxxxxxxx:role/CloudFormation-service-role-cp"
},
"CodePipelineServiceRole": {
"Description": "Service role for codepipeline",
"Type": "String",
"Default": "arn:aws:iam::xxxxxxxxxxxxxx:role/AWS-CodePipeline-Service"
},
"CodePipelineArtifactStoreBucket1": {
"Description": "S3 bucket to store the artifacts",
"Type": "String",
"Default": "bucket-us-east-1"
},
"CodePipelineArtifactStoreBucket2": {
"Description": "S3 bucket to store the artifacts",
"Type": "String",
"Default": "bucket-us-west-2"
}
},
"Resources": {
"AppPipeline": {
"Type": "AWS::CodePipeline::Pipeline",
"Properties": {
"Name": {"Fn::Sub": "${AWS::StackName}-cross-account-pipeline" },
"ArtifactStores": [
{
"ArtifactStore": {
"Type": "S3",
"Location": {
"Ref": "CodePipelineArtifactStoreBucket1"
}
},
"Region": "us-east-1"
},
{
"ArtifactStore": {
"Type": "S3",
"Location": {
"Ref": "CodePipelineArtifactStoreBucket2"
}
},
"Region": "us-west-2"
}
],
"RoleArn": {
"Ref": "CodePipelineServiceRole"
},
"Stages": [
{
"Name": "Source",
"Actions": [
{
"Name": "SourceAction",
"ActionTypeId": {
"Category": "Source",
"Owner": "AWS",
"Version": 1,
"Provider": "CodeCommit"
},
"OutputArtifacts": [
{
"Name": "SourceOutput"
}
],
"Configuration": {
"BranchName": {
"Ref": "BranchName"
},
"RepositoryName": {
"Ref": "RepositoryName"
},
"PollForSourceChanges": true
},
"RunOrder": 1
}
]
},
{
"Name": "Deploy-to-account-A",
"Actions": [
{
"Name": "stage-1",
"InputArtifacts": [
{
"Name": "SourceOutput"
}
],
"ActionTypeId": {
"Category": "Deploy",
"Owner": "AWS",
"Version": 1,
"Provider": "CloudFormation"
},
"Configuration": {
"ActionMode": "CREATE_UPDATE",
"StackName": "cloudformation-stack-name-account-A",
"TemplatePath":"SourceOutput::accountA.json",
"Capabilities": "CAPABILITY_IAM",
"RoleArn": {
"Ref": "CFNServiceRoleDeployA"
}
},
"RunOrder": 2,
"Region": "us-west-2"
}
]
}
]
}
}
}
}
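To create the pipeline from this template, you could deploy it with the CLI in the pipeline's home region (the stack name and file name below are placeholders):
aws cloudformation deploy \
    --region us-east-1 \
    --stack-name cross-region-pipeline \
    --template-file pipeline-template.json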

Amazon Redshift - Unload to S3 - Dynamic S3 file name

I have been using the UNLOAD statement in Redshift for a while now; it makes it easy to dump a file to S3 and then let people analyse it.
The time has come to try to automate it. We have Amazon Data Pipeline running for several tasks, and I wanted to run a SQLActivity to execute the UNLOAD automatically. I use a SQL script hosted in S3.
The query itself is correct, but what I have been trying to figure out is how I can dynamically assign the name of the file. For example:
UNLOAD('<the_query>')
TO 's3://my-bucket/' || to_char(current_date)
WITH CREDENTIALS '<credentials>'
ALLOWOVERWRITE
PARALLEL OFF
doesn't work, and of course I suspect that you can't execute functions (to_char) in the "TO" line. Is there any other way I can do it?
And if UNLOAD is not the way, do I have any other options for automating such tasks with the currently available infrastructure (Redshift + S3 + Data Pipeline; our Amazon EMR is not active yet)?
The only thing that I thought could work (but I'm not sure) is, instead of using a script file, to copy the script into the Script option of the SQLActivity (at the moment it points to a file) and reference {#ScheduleStartTime}.
Why not use RedshiftCopyActivity to copy from Redshift to S3? The input is a RedshiftDataNode and the output is an S3DataNode, where you can specify an expression for directoryPath.
You can also specify the transformSql property on the RedshiftCopyActivity to override the default query, which is select * from followed by the input Redshift table name (a snippet is shown after the sample pipeline below).
Sample pipeline:
{
"objects": [{
"id": "CSVId1",
"name": "DefaultCSV1",
"type": "CSV"
}, {
"id": "RedshiftDatabaseId1",
"databaseName": "dbname",
"username": "user",
"name": "DefaultRedshiftDatabase1",
"*password": "password",
"type": "RedshiftDatabase",
"clusterId": "redshiftclusterId"
}, {
"id": "Default",
"scheduleType": "timeseries",
"failureAndRerunMode": "CASCADE",
"name": "Default",
"role": "DataPipelineDefaultRole",
"resourceRole": "DataPipelineDefaultResourceRole"
}, {
"id": "RedshiftDataNodeId1",
"schedule": {
"ref": "ScheduleId1"
},
"tableName": "orders",
"name": "DefaultRedshiftDataNode1",
"type": "RedshiftDataNode",
"database": {
"ref": "RedshiftDatabaseId1"
}
}, {
"id": "Ec2ResourceId1",
"schedule": {
"ref": "ScheduleId1"
},
"securityGroups": "MySecurityGroup",
"name": "DefaultEc2Resource1",
"role": "DataPipelineDefaultRole",
"logUri": "s3://myLogs",
"resourceRole": "DataPipelineDefaultResourceRole",
"type": "Ec2Resource"
}, {
"myComment": "This object is used to control the task schedule.",
"id": "DefaultSchedule1",
"name": "RunOnce",
"occurrences": "1",
"period": "1 Day",
"type": "Schedule",
"startAt": "FIRST_ACTIVATION_DATE_TIME"
}, {
"id": "S3DataNodeId1",
"schedule": {
"ref": "ScheduleId1"
},
"directoryPath": "s3://my-bucket/#{format(#scheduledStartTime, 'YYYY-MM-dd-HH-mm-ss')}",
"name": "DefaultS3DataNode1",
"dataFormat": {
"ref": "CSVId1"
},
"type": "S3DataNode"
}, {
"id": "RedshiftCopyActivityId1",
"output": {
"ref": "S3DataNodeId1"
},
"input": {
"ref": "RedshiftDataNodeId1"
},
"schedule": {
"ref": "ScheduleId1"
},
"name": "DefaultRedshiftCopyActivity1",
"runsOn": {
"ref": "Ec2ResourceId1"
},
"type": "RedshiftCopyActivity"
}]
}
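For example, the copy activity above could be narrowed with the transformSql property mentioned earlier; the order_date filter here is made up for illustration:
{
    "id": "RedshiftCopyActivityId1",
    "transformSql": "select * from orders where order_date >= trunc(sysdate) - 7",
    "output": {
        "ref": "S3DataNodeId1"
    },
    "input": {
        "ref": "RedshiftDataNodeId1"
    },
    "schedule": {
        "ref": "ScheduleId1"
    },
    "name": "DefaultRedshiftCopyActivity1",
    "runsOn": {
        "ref": "Ec2ResourceId1"
    },
    "type": "RedshiftCopyActivity"
}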
Are you able to connect to the cluster from a shell (e.g. with psql)? If so, I would suggest writing a shell script where you can create variables and whatnot, then pass those variables into the connection's statement/query (a rough sketch follows).
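A minimal sketch of that approach, assuming the script runs on a host with psql connectivity to the cluster; the host, database, and user are placeholders, and the query/credentials placeholders mirror the question:
#!/bin/bash
# Build a date-stamped S3 prefix and feed it into the UNLOAD statement.
S3_PATH="s3://my-bucket/$(date +%Y-%m-%d)/"

# Authentication via PGPASSWORD or ~/.pgpass is assumed.
psql "host=my-cluster.xxxxxxxx.us-east-1.redshift.amazonaws.com port=5439 dbname=mydb user=myuser" <<EOF
UNLOAD ('<the_query>')
TO '${S3_PATH}'
WITH CREDENTIALS '<credentials>'
ALLOWOVERWRITE
PARALLEL OFF;
EOF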
You can use a Redshift stored procedure as a wrapper around the UNLOAD statement and derive the S3 path name dynamically.
In your job, call the procedure; it builds the UNLOAD statement dynamically and executes it.
This way you can avoid the other services, but it depends on what kind of use case you are working on.
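A rough sketch of such a wrapper, assuming UNLOAD via dynamic SQL is allowed in stored procedures on your cluster version; the table, bucket, and IAM role are placeholders:
CREATE OR REPLACE PROCEDURE unload_with_date(bucket_prefix VARCHAR(256))
AS $$
DECLARE
    s3_path VARCHAR(512);
BEGIN
    -- Derive a date-stamped S3 prefix, e.g. s3://my-bucket/2020-05-12/
    s3_path := bucket_prefix || to_char(current_date, 'YYYY-MM-DD') || '/';
    -- Build and run the UNLOAD statement dynamically; quote_literal handles quoting.
    EXECUTE 'UNLOAD (' || quote_literal('select * from orders')
        || ') TO ' || quote_literal(s3_path)
        || ' IAM_ROLE ''arn:aws:iam::123456789012:role/redshift-unload-role'''
        || ' ALLOWOVERWRITE PARALLEL OFF';
END;
$$ LANGUAGE plpgsql;

-- In the scheduled job:
CALL unload_with_date('s3://my-bucket/');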