How to check if a specific resource already exists in a CloudFormation script

I am using CloudFormation to create a stack which includes an autoscaled EC2 instance and an S3 bucket. For the S3 bucket I have DeletionPolicy set to Retain, which works fine until I want to rerun my CloudFormation script. Since the script created the S3 bucket on previous runs, it fails on subsequent runs saying my S3 bucket already exists, and none of the other resources get created either. My question is: how do I check inside the CloudFormation script whether my S3 bucket exists, and if it does, skip creating that resource? I've looked into Conditions in AWS, but they all seem parameter based; I have yet to find a function which checks for existing resources.

There is no obvious way to do this, unless you create the template dynamically with an explicit check. Stacks created from the same template are independent entities, and if you create a stack that contains a bucket, delete the stack while retaining the bucket, and then create a new stack (even one with the same name), there is no connection between this new stack and the bucket created as part of the previous stack.
If you want to use the same S3 bucket for multiple stacks (even if only one of them exists at a time), that bucket does not really belong in the stack - it would make more sense to create the bucket in a separate stack, using a separate template (putting the bucket URL in the "Outputs" section), and then referencing it from your original stack using a parameter.
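A minimal sketch of that split (all names here are assumptions): a bucket-only template exposes the bucket name in its Outputs, and the application stack receives it as an ordinary parameter.
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "SharedBucket": {
      "Type": "AWS::S3::Bucket",
      "DeletionPolicy": "Retain"
    }
  },
  "Outputs": {
    "SharedBucketName": {
      "Description": "Name of the long-lived bucket (sketch only)",
      "Value": { "Ref": "SharedBucket" }
    }
  }
}
The application stack would then declare, say, a BucketName parameter of type String and use { "Ref": "BucketName" } wherever the bucket is needed, so re-creating the application stack never attempts to re-create the bucket.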
Update November 2019:
There is a possible alternative now. On November 13th AWS launched CloudFormation Resource Import. With that feature you can now create a stack from existing resources. Not many resource types are supported by this feature yet, but S3 buckets are.
In your case you'd have to do it in two steps:
Create a template that contains only the preexisting S3 bucket, using "Create stack" > "With existing resources (import resources)" (this is the --change-set-type IMPORT flag in the CLI; see the docs).
Update the template to include all the resources that don't already exist.
As they note in their documentation, this feature is very versatile, so it opens up a lot of possibilities. See the docs for more info.
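A hedged CLI sketch of the import step (the stack, bucket, and file names are assumptions; note the imported bucket must be declared with a DeletionPolicy in the template):
# Create an IMPORT change set that maps the existing bucket to a logical ID in the template
aws cloudformation create-change-set \
  --stack-name existing-bucket-stack \
  --change-set-name import-bucket \
  --change-set-type IMPORT \
  --resources-to-import '[{"ResourceType":"AWS::S3::Bucket","LogicalResourceId":"MyRetainedBucket","ResourceIdentifier":{"BucketName":"my-existing-bucket"}}]' \
  --template-body file://bucket-only-template.json

# Execute the change set to bring the bucket under the stack's management
aws cloudformation execute-change-set \
  --stack-name existing-bucket-stack \
  --change-set-name import-bucket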

Using CloudFormation you can use Conditions.
I created an input parameter "ShouldCreateBucketInputParameter", and then using the CLI you just need to set it to "true" or "false".
CloudFormation JSON file:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Transform": "AWS::Serverless-2016-10-31",
  "Description": "",
  "Parameters": {
    "ShouldCreateBucketInputParameter": {
      "Type": "String",
      "AllowedValues": [ "true", "false" ],
      "Description": "If true then the S3 bucket that will be proxied will be created with the CloudFormation stack."
    }
  },
  "Conditions": {
    "CreateS3Bucket": {
      "Fn::Equals": [ { "Ref": "ShouldCreateBucketInputParameter" }, "true" ]
    }
  },
  "Resources": {
    "SerialNumberBucketResource": {
      "Type": "AWS::S3::Bucket",
      "Condition": "CreateS3Bucket",
      "Properties": {
        "AccessControl": "Private"
      }
    }
  },
  "Outputs": {}
}
And then (I am using the CLI to deploy the stack):
aws cloudformation deploy --template ./s3BucketWithCondition.json --stack-name bucket-stack --parameter-overrides ShouldCreateBucketInputParameter="true"
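When re-running against an environment where the bucket already exists (for example because it was retained from an earlier stack), you would presumably pass "false" so the template skips the bucket:
aws cloudformation deploy --template ./s3BucketWithCondition.json --stack-name bucket-stack --parameter-overrides ShouldCreateBucketInputParameter="false"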

Just add an input parameter to the CloudFormation template to indicate that an existing bucket should be used, unless you don't already know that at the time you use the template. Then you can either add the new resource or not based on the parameter value.
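For example, downstream references could then resolve to whichever bucket is in play. A minimal sketch, assuming an ExistingBucketName parameter alongside the CreateS3Bucket condition shown above:
"Outputs": {
  "BucketInUse": {
    "Value": {
      "Fn::If": [
        "CreateS3Bucket",
        { "Ref": "SerialNumberBucketResource" },
        { "Ref": "ExistingBucketName" }
      ]
    }
  }
}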

If you do updates (potentially of stacks within stacks, a.k.a. nested stacks), the unchanged parts don't get updated.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-stack.html?icmpid=docs_cfn_console_designer
You can then set policies as mentioned to prevent deletion. [remember 'cancel update' permissions for rollbacks]
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/protect-stack-resources.html
There are also cross-stack outputs to be aware of: you add Export names to the stack's Outputs.
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/outputs-section-structure.html
Walkthrough...
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/walkthrough-crossstackref.html
Then you need to use Fn::ImportValue ...
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-importvalue.html
The walkthrough implies one could use a network stack name parameter.
Unfortunately, you get an error like this when you try to use imported values in Conditions:
Template validation error: Template error: Cannot use Fn::ImportValue
in Conditions.
Or when you try to use the imported value as a parameter Default:
Template validation error: Template format error: Every Default member
must be a string.
This can also happen when the export Name depends on other values:
Template format error: Output ExportOut is malformed. The Name field
of Export must not depend on any resources, imported values, or
Fn::GetAZs.
So you can't stop a template from creating the existing resource again from within the same file; you can only do it by putting the resource into another stack and using the export/import reference.
But if you separate the two, there is a dependency that will stop and roll back, for instance, a dependency's deletion, thanks to the reference via the Fn::ImportValue function.
The example given here is as follows.
First, make a group template:
{
"AWSTemplateFormatVersion": "2010-09-09",
"Metadata": {
"AWS::CloudFormation::Designer": {
"6927bf3d-85ec-449d-8ee1-f3e1804d78f7": {
"size": {
"width": 60,
"height": 60
},
"position": {
"x": -390,
"y": 130
},
"z": 0,
"embeds": []
},
"6fe3a2b8-16a1-4ce0-b412-4d4f87e9c54c": {
"source": {
"id": "ac295134-9e38-4425-8d20-2c50ef0d51b3"
},
"target": {
"id": "6927bf3d-85ec-449d-8ee1-f3e1804d78f7"
},
"z": 1
}
}
},
"Resources": {
"TestGroup": {
"Type": "AWS::IAM::Group",
"Properties": {},
"Metadata": {
"AWS::CloudFormation::Designer": {
"id": "6927bf3d-85ec-449d-8ee1-f3e1804d78f7"
}
}
}
},
"Parameters": {},
"Outputs": {
"GroupNameOut": {
"Description": "The Group Name",
"Value": {
"Ref": "TestGroup"
},
"Export": {
"Name": "Exported-GroupName"
}
}
}
}
Then make a User Template that needs the group.
{
"AWSTemplateFormatVersion": "2010-09-09",
"Metadata": {
"AWS::CloudFormation::Designer": {
"ac295134-9e38-4425-8d20-2c50ef0d51b3": {
"size": {
"width": 60,
"height": 60
},
"position": {
"x": -450,
"y": 130
},
"z": 0,
"embeds": [],
"isrelatedto": [
"6927bf3d-85ec-449d-8ee1-f3e1804d78f7"
]
},
"6fe3a2b8-16a1-4ce0-b412-4d4f87e9c54c": {
"source": {
"id": "ac295134-9e38-4425-8d20-2c50ef0d51b3"
},
"target": {
"id": "6927bf3d-85ec-449d-8ee1-f3e1804d78f7"
},
"z": 1
}
}
},
"Resources": {
"TestUser": {
"Type": "AWS::IAM::User",
"Properties": {
"UserName": {
"Ref": "UserNameParam"
},
"Groups": [
{
"Fn::ImportValue": "Exported-GroupName"
}
]
},
"Metadata": {
"AWS::CloudFormation::Designer": {
"id": "ac295134-9e38-4425-8d20-2c50ef0d51b3"
}
}
}
},
"Parameters": {
"UserNameParam": {
"Default": "testerUser",
"Description": "Username For Test",
"Type": "String",
"MinLength": "1",
"MaxLength": "16",
"AllowedPattern": "[a-zA-Z][a-zA-Z0-9]*",
"ConstraintDescription": "must begin with a letter and contain only alphanumeric characters."
}
},
"Outputs": {
"UserNameOut": {
"Description": "The User Name",
"Value": {
"Ref": "TestUser"
}
}
}
}
You will get:
No export named Exported-GroupName found. Rollback requested by user.
if you run the user template with no group export in place.
You could then use the Nested stack approach.
{
"AWSTemplateFormatVersion": "2010-09-09",
"Metadata": {
"AWS::CloudFormation::Designer": {
"66470873-b2bd-4a5a-af19-5d54b11f48ef": {
"size": {
"width": 60,
"height": 60
},
"position": {
"x": -815,
"y": 169
},
"z": 0,
"embeds": []
},
"ed1de011-f1bb-4788-b63e-dcf5494d10d1": {
"size": {
"width": 60,
"height": 60
},
"position": {
"x": -710,
"y": 170
},
"z": 0,
"dependson": [
"66470873-b2bd-4a5a-af19-5d54b11f48ef"
]
},
"c978f2d9-3fb2-4420-b255-74941f10a28a": {
"source": {
"id": "ed1de011-f1bb-4788-b63e-dcf5494d10d1"
},
"target": {
"id": "66470873-b2bd-4a5a-af19-5d54b11f48ef"
},
"z": 1
}
}
},
"Resources": {
"GroupStack": {
"Type": "AWS::CloudFormation::Stack",
"Properties": {
"TemplateURL": "https://s3-us-west-2.amazonaws.com/cf-templates-x-TestGroup.json"
},
"Metadata": {
"AWS::CloudFormation::Designer": {
"id": "66470873-b2bd-4a5a-af19-5d54b11f48ef"
}
}
},
"UserStack": {
"Type": "AWS::CloudFormation::Stack",
"Properties": {
"TemplateURL": "https://s3-us-west-2.amazonaws.com/cf-templates-x-TestUserFindsGroup.json"
},
"Metadata": {
"AWS::CloudFormation::Designer": {
"id": "ed1de011-f1bb-4788-b63e-dcf5494d10d1"
}
},
"DependsOn": [
"GroupStack"
]
}
}
}
Unfortunately you can still delete the user stack even though it was made by the multi-stack template in this example, but with deletion policies and other things it just might help.
Then you are only updating the various stacks it creates, and you won't run the multi-stack template if you are, for instance, reusing a bucket.
Otherwise you'll be looking at APIs and scripts in various flavors.

If you're trying to incorporate existing resources into CloudFormation, it is unfortunately not possible. If you just want a set of resources to be part of your template or not, depending on the value of some parameters, you can use Conditions. But Conditions don't change the nature of CloudFormation itself: they only determine which resources are desired, not what actions will be taken, and they cannot see whether a resource already exists.

Something not explicitly stated: if your first deployment fails, resources will be deleted unless you have a retention policy. In that case it is safe to delete the resource in question manually; the next deployment will recreate it without the "resource already exists" error.
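As a hedged sketch of that cleanup (the stack and bucket names are assumptions):
# Remove the failed stack; resources with a Retain policy, such as the bucket, survive this
aws cloudformation delete-stack --stack-name my-app-stack

# Manually delete the retained bucket so the next deployment can recreate it
aws s3 rb s3://my-retained-bucket --force

# Redeploy from scratch
aws cloudformation deploy --template ./template.json --stack-name my-app-stack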

Related

Use env variables inside CloudFormation templates

I have an Amplify app with multiple branches.
I have added a custom CloudFormation template with amplify add custom.
It looks like:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Parameters": { "env": { "Type": "String" } },
  "Resources": {
    "db": {
      "Type": "AWS::Timestream::Database",
      "Properties": { "DatabaseName": "dev_db" }
    },
    "timestreamtable": {
      "DependsOn": "db",
      "Type": "AWS::Timestream::Table",
      "Properties": {
        "DatabaseName": "dev_db",
        "TableName": "avg_16_4h",
        "MagneticStoreWriteProperties": { "EnableMagneticStoreWrites": true },
        "RetentionProperties": {
          "MemoryStoreRetentionPeriodInHours": "8640",
          "MagneticStoreRetentionPeriodInDays": "1825"
        }
      }
    }
  },
  "Outputs": {},
  "Description": "{\"createdOn\":\"Windows\",\"createdBy\":\"Amplify\",\"createdWith\":\"8.3.1\",\"stackType\":\"custom-customCloudformation\",\"metadata\":{}}"
}
You can see there is a field called DatabaseName. In my Amplify app I have defined an env variable named TIMESTREAM_DB and I want to use it inside this CloudFormation file.
Is this possible, or do I need to write it all by hand?
Templates cannot access arbitrary env vars. Instead, CloudFormation injects deploy-time values into a template with Parameters.
Amplify helpfully adds the env variable as a parameter. Per the Amplify docs, use the env value as the AWS::Timestream::Database name suffix:
"DatabaseName": { "Fn::Join": [ "", [ "my-timestream-db-name-", { "Ref": "env" } ] ] }
The AWS::Timestream::Table resource also requires a DatabaseName parameter. You could repeat the above, but it's more DRY to get the name via the Database's Ref:
"DatabaseName": { "Ref" : "db" }

AWS CloudFormation "include" transform error

I have a template and another partial definition I'd like to include in the main template definition. The sample (main template) is below.
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "",
  "Parameters": {
    "Environment": {
      "Type": "String",
      "Description": "Specify Environment: prod | dev ",
      "AllowedValues": [ "prod", "dev" ],
      "Default": "dev"
    }
  },
  "Transform": {
    "Name": "AWS::Include",
    "Parameters": {
      "Location": "s3://some-s3-local-bucket/part-1.json"
    }
  },
  "Resources": {},
  "Outputs": {}
}
Below is the definition of the part to include in the main template:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "",
  "Resources": {
    "hellobucket": {
      "Type": "AWS::S3::Bucket",
      "Properties": {
        "BucketName": { "Fn::Sub": "testbucket-${Environment}" }
      }
    }
  },
  "Outputs": {}
}
When I try to create a stack based on these definitions I receive a strange error: "Template parameters modified by transform". I don't see any reason why any parameter would be considered "modified".
I don't want to create many nested stacks, because AWS limits the number of stacks I can create, so the goal is to split the stack definition into many (well manageable) smaller files and, based on them, create ONE stack with all the related resources.
How do I properly decompose a bigger stack definition into smaller files?
I haven't done this before, but it might be because you are using the transform to pull in a template that creates an S3 bucket, while the template you are pulling into the original one has all its parameter/output fields etc. empty. I think this is what the error message (Template parameters modified by transform) relates to. Try removing the empty entries from the S3 template to see if that helps.
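For illustration only, a sketch of the included snippet with the empty template-level entries stripped, on the assumption that the fragment should carry just the resource definition:
{
  "Resources": {
    "hellobucket": {
      "Type": "AWS::S3::Bucket",
      "Properties": {
        "BucketName": { "Fn::Sub": "testbucket-${Environment}" }
      }
    }
  }
}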

AWS CodePipeline: Execute deploy action in a different region than the one the pipeline is triggered in

I'm setting up a pipeline to automate CloudFormation stack template deployment.
The pipeline itself is created in the AWS eu-west-1 region, but the CloudFormation stack templates would be deployed in any other region.
I know how to execute a pipeline action in a different account, but I don't see where to specify the region I would like my template to be deployed in, like we do with the AWS CLI: aws --region <region> cloudformation deploy ...
Is there any way to trigger a pipeline in one region and execute a deploy action in another region?
The action configuration properties don't offer such a possibility...
A workaround would be to run the aws cli deploy command in the CodeBuild container and specify the right region, but I would like to know if there is a more elegant way to do it.
If you're looking to deploy to multiple regions, one after the other, you could create a Code Pipeline pipeline in every region you want to deploy to, and set up S3 cross-region replication so that the output of the first pipeline becomes the input to a pipeline in the next region.
Here's a blog post explaining this further: https://aws.amazon.com/blogs/devops/building-a-cross-regioncross-account-code-deployment-solution-on-aws/
Since late November 2018, CodePipeline supports cross-region deploys. However it still leaves a lot to be desired, as you need to create artifact buckets in each region and copy the deployment artifacts over to them (e.g. in the CodeBuild container, as you mentioned) before the Deploy action is triggered. So it's not as automated as it could be, but if you go through the process of setting it up, it works well.
CodePipeline now supports cross-region deployment; to trigger the deployment in a different region we can specify the "Region": "us-west-2" property in the CloudFormation action stage, which will run the deployment in that specific region.
Steps to follow for this setup:
Create two buckets in two different regions, for example a bucket in us-east-1 and a bucket in us-west-2 (we can also reuse the bucket already created by CodePipeline when a pipeline is first set up in a region).
Configure the pipeline in such a way that it can use the respective bucket while taking action in the respective region.
Specify the region in the action for CodePipeline.
Note: below is a sample CloudFormation template which will help you do a cross-region CloudFormation deployment.
{
"Parameters": {
"BranchName": {
"Description": "CodeCommit branch name for all the resources",
"Type": "String",
"Default": "master"
},
"RepositoryName": {
"Description": "CodeComit repository name",
"Type": "String",
"Default": "aws-account-resources"
},
"CFNServiceRoleDeployA": {
"Description": "CFN service role for create resourcecs for account-A",
"Type": "String",
"Default": "arn:aws:iam::xxxxxxxxxxxxxx:role/CloudFormation-service-role-cp"
},
"CodePipelineServiceRole": {
"Description": "Service role for codepipeline",
"Type": "String",
"Default": "arn:aws:iam::xxxxxxxxxxxxxx:role/AWS-CodePipeline-Service"
},
"CodePipelineArtifactStoreBucket1": {
"Description": "S3 bucket to store the artifacts",
"Type": "String",
"Default": "bucket-us-east-1"
},
"CodePipelineArtifactStoreBucket2": {
"Description": "S3 bucket to store the artifacts",
"Type": "String",
"Default": "bucket-us-west-2"
}
},
"Resources": {
"AppPipeline": {
"Type": "AWS::CodePipeline::Pipeline",
"Properties": {
"Name": {"Fn::Sub": "${AWS::StackName}-cross-account-pipeline" },
"ArtifactStores": [
{
"ArtifactStore": {
"Type": "S3",
"Location": {
"Ref": "CodePipelineArtifactStoreBucket1"
}
},
"Region": "us-east-1"
},
{
"ArtifactStore": {
"Type": "S3",
"Location": {
"Ref": "CodePipelineArtifactStoreBucket2"
}
},
"Region": "us-west-2"
}
],
"RoleArn": {
"Ref": "CodePipelineServiceRole"
},
"Stages": [
{
"Name": "Source",
"Actions": [
{
"Name": "SourceAction",
"ActionTypeId": {
"Category": "Source",
"Owner": "AWS",
"Version": 1,
"Provider": "CodeCommit"
},
"OutputArtifacts": [
{
"Name": "SourceOutput"
}
],
"Configuration": {
"BranchName": {
"Ref": "BranchName"
},
"RepositoryName": {
"Ref": "RepositoryName"
},
"PollForSourceChanges": true
},
"RunOrder": 1
}
]
},
{
"Name": "Deploy-to-account-A",
"Actions": [
{
"Name": "stage-1",
"InputArtifacts": [
{
"Name": "SourceOutput"
}
],
"ActionTypeId": {
"Category": "Deploy",
"Owner": "AWS",
"Version": 1,
"Provider": "CloudFormation"
},
"Configuration": {
"ActionMode": "CREATE_UPDATE",
"StackName": "cloudformation-stack-name-account-A",
"TemplatePath":"SourceOutput::accountA.json",
"Capabilities": "CAPABILITY_IAM",
"RoleArn": {
"Ref": "CFNServiceRoleDeployA"
}
},
"RunOrder": 2,
"Region": "us-west-2"
}
]
}
]
}
}
}
}

AWS Nested Stacks - Referencing a Parent Stack's Resource

I'm trying to pass resources (ApiGatewayRestApi and a custom authorizer) to a nested stack through stack parameters; however, it continually fails with Embedded stack <stack_name> was not successfully created: The following resource(s) failed to create. Here's my setup in Serverless:
Parent Stack
{
...
"NestedStack": {
"Type": "AWS::CloudFormation::Stack",
"Properties": {
"Parameters": {
"ServerlessDeploymentBucket": {
"Ref": "ServerlessDeploymentBucket"
},
"ApiGatewayRestApi": {
"Ref": "ApiGatewayRestApi"
},
"AuthDashjwtApiGatewayAuthorizer": {
"Ref": "AuthDashjwtApiGatewayAuthorizer"
},
},
"TemplateURL": "..."
}
},
}
Nested Stack
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "Nested Stack",
"Parameters": {
"ServerlessDeploymentBucket": { "Type": "String" },
"ApiGatewayRestApi": {
"Description": "Rest API",
"Type": "String"
},
"AuthDashjwtApiGatewayAuthorizer": { "Type": "String" },
},
"Resources": {
"ApiGatewayMethodEventsEventidVarStreamsPost": {
"Type": "AWS::ApiGateway::Method",
"Properties": {
"HttpMethod": "POST",
"RequestParameters": {},
"ResourceId": { "Ref": "ApiGatewayResourceEventsEventidVarStreams" },
"RestApiId": { "Ref": "ApiGatewayRestApi" },
"AuthorizationType": "CUSTOM",
"AuthorizerId": { "Ref": "AuthDashjwtApiGatewayAuthorizer" },
...
}
...
}
...
}
Am I not referencing or passing in parameters correctly?
Update based on comments
Unless I'm missing something, the only error message in the CF section of the console is:
Embedded stack <stac_name> was not successfully created: The
following resource(s) failed to create: [PatchDasheventLogGroup,
PostDashstreamLogGroup, GetDashstreamsLogGroup, GetDasheventsLogGroup,
ApiGatewayRestApi, GetDasheventLogGroup, PostDasheventLogGroup,
AuthDashjwtApiGatewayAuthorizer]
As far as the log groups go, they look like this:
"GetDasheventLogGroup": {
"Type": "AWS::Logs::LogGroup",
"Properties": {
"LogGroupName": "/aws/lambda/live-api-local-get-event"
}
}
Update 2
The log group issue was due to these logs being moved from the parent stack to the nested stack and needing a new name. In the LogGroup docs I found:
If you specify a name, you cannot perform updates that require replacement of this resource. You can perform updates that require no or some interruption. If you must replace the resource, specify a new name.
This looks like it may have solved the issue... Some more testing is needed to confirm!
The comment from @speshak eventually led me to the answer. I didn't need to filter by the Failed state, but rather by Deleted. This allowed me to see the logs for the nested stack that was created, then deleted, with more specific messaging.
What this ended up showing me is that the update-stack process was applying the nested stacks to my current setup before removing all the resources from what would become the root stack. So the true problem was that I was accidentally trying to create duplicate resources: AWS saw a resource in a nested stack that matched the root stack and kicked out with a validation error, even though the resource would (eventually) have been removed from the root stack.
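For reference, a rough CLI equivalent of that console filter (the stack ID is a placeholder):
# List stacks that have already been deleted, to find the short-lived nested stack
aws cloudformation list-stacks --stack-status-filter DELETE_COMPLETE

# Then inspect its events for the more specific failure messages
aws cloudformation describe-stack-events --stack-name <deleted-nested-stack-id>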

Amazon Redshift - Unload to S3 - Dynamic S3 file name

I have been using the UNLOAD statement in Redshift for a while now; it makes it easy to dump a file to S3 and then let people analyse it.
The time has come to try to automate it. We have Amazon Data Pipeline running for several tasks and I wanted to run a SQLActivity to execute UNLOAD automatically. I use a SQL script hosted in S3.
The query itself is correct, but what I have been trying to figure out is how I can dynamically assign the name of the file. For example:
UNLOAD('<the_query>')
TO 's3://my-bucket/' || to_char(current_date)
WITH CREDENTIALS '<credentials>'
ALLOWOVERWRITE
PARALLEL OFF
This doesn't work, and of course I suspect that you can't execute functions (to_char) in the TO line. Is there any other way I can do it?
And if UNLOAD is not the way, do I have any other options for automating such tasks with the currently available infrastructure (Redshift + S3 + Data Pipeline; our Amazon EMR is not active yet)?
The only thing that I thought could work (but I'm not sure) is, instead of using the script file, to copy the script into the Script option in SQLActivity (at the moment it points to a file) and reference {#ScheduleStartTime}.
Why not use RedshiftCopyActivity to copy from Redshift to S3? Input is a RedshiftDataNode and output is an S3DataNode, where you can specify an expression for directoryPath.
You can also specify the transformSql property in RedshiftCopyActivity to override the default value of select * from <inputRedshiftTable>.
Sample pipeline:
{
"objects": [{
"id": "CSVId1",
"name": "DefaultCSV1",
"type": "CSV"
}, {
"id": "RedshiftDatabaseId1",
"databaseName": "dbname",
"username": "user",
"name": "DefaultRedshiftDatabase1",
"*password": "password",
"type": "RedshiftDatabase",
"clusterId": "redshiftclusterId"
}, {
"id": "Default",
"scheduleType": "timeseries",
"failureAndRerunMode": "CASCADE",
"name": "Default",
"role": "DataPipelineDefaultRole",
"resourceRole": "DataPipelineDefaultResourceRole"
}, {
"id": "RedshiftDataNodeId1",
"schedule": {
"ref": "ScheduleId1"
},
"tableName": "orders",
"name": "DefaultRedshiftDataNode1",
"type": "RedshiftDataNode",
"database": {
"ref": "RedshiftDatabaseId1"
}
}, {
"id": "Ec2ResourceId1",
"schedule": {
"ref": "ScheduleId1"
},
"securityGroups": "MySecurityGroup",
"name": "DefaultEc2Resource1",
"role": "DataPipelineDefaultRole",
"logUri": "s3://myLogs",
"resourceRole": "DataPipelineDefaultResourceRole",
"type": "Ec2Resource"
}, {
"myComment": "This object is used to control the task schedule.",
"id": "DefaultSchedule1",
"name": "RunOnce",
"occurrences": "1",
"period": "1 Day",
"type": "Schedule",
"startAt": "FIRST_ACTIVATION_DATE_TIME"
}, {
"id": "S3DataNodeId1",
"schedule": {
"ref": "ScheduleId1"
},
"directoryPath": "s3://my-bucket/#{format(#scheduledStartTime, 'YYYY-MM-dd-HH-mm-ss')}",
"name": "DefaultS3DataNode1",
"dataFormat": {
"ref": "CSVId1"
},
"type": "S3DataNode"
}, {
"id": "RedshiftCopyActivityId1",
"output": {
"ref": "S3DataNodeId1"
},
"input": {
"ref": "RedshiftDataNodeId1"
},
"schedule": {
"ref": "ScheduleId1"
},
"name": "DefaultRedshiftCopyActivity1",
"runsOn": {
"ref": "Ec2ResourceId1"
},
"type": "RedshiftCopyActivity"
}]
}
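If you want the transformSql override mentioned above, a hedged sketch of how the copy activity object might carry it (the query text is an assumption):
{
  "id": "RedshiftCopyActivityId1",
  "type": "RedshiftCopyActivity",
  "transformSql": "select * from orders where order_date = current_date",
  "input": { "ref": "RedshiftDataNodeId1" },
  "output": { "ref": "S3DataNodeId1" }
}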
Are you able to SSH into the cluster? If so, I would suggest writing a shell script where you can create variables and whatnot, then pass those variables into a connection's statement query.
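A minimal sketch of that idea, assuming psql connectivity to the cluster; the host, database, user, and credentials placeholder are all assumptions:
#!/bin/bash
# Build a dated S3 prefix for today's unload
TARGET="s3://my-bucket/$(date +%Y-%m-%d)/"

# Run the UNLOAD with the dynamic path (set PGPASSWORD or use .pgpass for authentication)
psql -h my-cluster.example.us-east-1.redshift.amazonaws.com -p 5439 -U admin -d mydb <<SQL
UNLOAD ('select * from my_table')
TO '${TARGET}'
WITH CREDENTIALS '<credentials>'
ALLOWOVERWRITE
PARALLEL OFF;
SQL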
Alternatively, use a Redshift procedural wrapper around the UNLOAD statement and dynamically derive the S3 path name.
In your job, call a procedure that dynamically builds the UNLOAD statement and executes it.
This way you can avoid the other services, but it depends on what kind of use case you are working on.
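A hedged sketch of such a wrapper (procedure, table, bucket, and credentials names are assumptions), relying on Redshift's dynamic SQL support in stored procedures:
CREATE OR REPLACE PROCEDURE unload_daily()
AS $$
BEGIN
  -- Build the UNLOAD statement with a date-based S3 prefix and run it dynamically
  EXECUTE 'UNLOAD (''select * from my_table'') '
       || 'TO ''s3://my-bucket/' || to_char(current_date, 'YYYY-MM-DD') || '/'' '
       || 'CREDENTIALS ''<credentials>'' '
       || 'ALLOWOVERWRITE PARALLEL OFF';
END;
$$ LANGUAGE plpgsql;

-- The job then just calls:
CALL unload_daily();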