Multiple configuration files for a CodePipeline action - amazon-web-services

I want to run a CFN stack from CodePipeline, but the parameters are scattered across the outputs of previous actions. The only way I can think of is to run a Lambda action that loads all of them and creates a single combined output.
Is there something more straight forward?
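For reference, here is a minimal sketch of the Lambda-action workaround I have in mind, written against the AWS SDK for JavaScript v3. I'm assuming each input artifact zip contains a JSON file of outputs; the config.json name, the Parameters wrapper, and the adm-zip dependency are just placeholder choices of mine, and the function's role (or job.data.artifactCredentials) still needs access to the artifact bucket:

import { CodePipelineClient, PutJobSuccessResultCommand, PutJobFailureResultCommand } from "@aws-sdk/client-codepipeline";
import { S3Client, GetObjectCommand, PutObjectCommand } from "@aws-sdk/client-s3";
import AdmZip from "adm-zip";

const cp = new CodePipelineClient({});
const s3 = new S3Client({});

export const handler = async (event: any) => {
  const job = event["CodePipeline.job"];
  try {
    // Collect key/value pairs from every JSON file in every input artifact.
    const merged: Record<string, string> = {};
    for (const artifact of job.data.inputArtifacts) {
      const { bucketName, objectKey } = artifact.location.s3Location;
      const obj = await s3.send(new GetObjectCommand({ Bucket: bucketName, Key: objectKey }));
      const zip = new AdmZip(Buffer.from(await obj.Body!.transformToByteArray()));
      for (const entry of zip.getEntries()) {
        if (entry.entryName.endsWith(".json")) {
          Object.assign(merged, JSON.parse(entry.getData().toString("utf8")));
        }
      }
    }
    // Write one combined file as this action's output artifact.
    // ("config.json" and the Parameters wrapper are assumptions for a CFN deploy action.)
    const out = job.data.outputArtifacts[0].location.s3Location;
    const outZip = new AdmZip();
    outZip.addFile("config.json", Buffer.from(JSON.stringify({ Parameters: merged })));
    await s3.send(new PutObjectCommand({ Bucket: out.bucketName, Key: out.objectKey, Body: outZip.toBuffer() }));
    await cp.send(new PutJobSuccessResultCommand({ jobId: job.id }));
  } catch (err) {
    await cp.send(new PutJobFailureResultCommand({
      jobId: job.id,
      failureDetails: { type: "JobFailed", message: String(err) },
    }));
  }
};

The combined artifact could then be referenced as the template configuration for the CloudFormation action.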

Related

aws_codepipeline inside cdk-pipelines, how?

I'm exploring how to do CI/CD using cdk-pipelines, but while the setup part works, I don't understand the control part. All the examples are really simple inline-code Lambda functions.
How do I "release a change" of a codepipeline.Pipeline inside a cdk_pipelines?
const cdk_pipeline = new pipelines.CodePipeline(...)
cdk_pipeline.addStage(new BuildImageTestStage(...))
// build-image-test-stage.ts
new BuildImageTestStack(...)
// build-image-test-stack.ts
const pipeline = new codepipeline.Pipeline(...)
pipeline.addStage(...CodeStarConnectionsSourceAction...)
pipeline.addStage(...CodeBuildAction...)
It sets up the pipeline just fine, but it doesn't fire off the actual codepipeline itself.
Options that I see:
codepipeline.Pipeline(triggerOnPush: true) - doesn't work, because I want the CloudFormation to run first and THEN build+test. Trigger on push effectively runs them in parallel.
Completely separate setup from code deployment by using codepipeline_actions.CloudFormationCreateUpdateStackAction
An option I'm unsure about:
cdk_pipeline.addStage(..., { post: [] }) - use a CDK Pipeline post step to trigger a code deploy, but I'm unsure how to "wait" for it to finish.
Separately, I wish cdk-pipelines were named differently than codepipeline. It's just so hard to search.
[Edit after exchanges in the comments]: pipelines.CodePipeline is a wrapper around a codepipeline.Pipeline. It creates a new codepipeline.Pipeline under the hood in its constructor. The const pipeline = new codepipeline.Pipeline(...) line is adding a *second* codepipeline.Pipeline. You almost certainly don't want two. You have a few options:
Option 1: Pass codePipeline: pipeline in the cdk_pipeline constructor. cdk_pipeline will then use your codepipeline.Pipeline instead of creating its own.
Option 2: Get rid of your pipeline instance. Refactor your build/test actions as pre or post Steps in the options for cdk_pipeline.addStage or addWave. A CodeBuildStep will add a build step, for example. You can add arbitrary CodePipeline actions as steps, too. Both options are sketched below.
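A rough sketch of both options, reusing the construct names from the question; the repo, connection ARN, commands and step ids are placeholders of mine:

// Option 1: let pipelines.CodePipeline build on top of your own codepipeline.Pipeline.
// (Don't add your own source/build stages to it; the CDK pipeline populates it.)
const inner = new codepipeline.Pipeline(this, "InnerPipeline");
const cdk_pipeline = new pipelines.CodePipeline(this, "CdkPipeline", {
  codePipeline: inner, // omit this line if you go with Option 2 instead
  synth: new pipelines.ShellStep("Synth", {
    input: pipelines.CodePipelineSource.connection("my-org/my-repo", "main", {
      connectionArn: "arn:aws:codestar-connections:...", // placeholder
    }),
    commands: ["npm ci", "npx cdk synth"],
  }),
});

// Option 2: skip the inner Pipeline and express the image build/test as a post step.
cdk_pipeline.addStage(new BuildImageTestStage(this, "BuildImageTest"), {
  post: [
    new pipelines.CodeBuildStep("BuildAndTestImage", {
      commands: ["docker build -t my-image .", "./run-tests.sh"],
    }),
  ],
});

Either way, a post step runs only after the stacks in BuildImageTestStage have finished deploying, which gives you the deploy-first, then build+test ordering you're after.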
AWS pipeline executions always run from beginning to end, each step "waiting" for the previous one to finish (unless stopped by an error). A CDK pipelines.CodePipeline always starts with the source action defined in its synth property.
The addStage method adds a serial stage to the pipeline. To add parallel stages, use addWave.
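For example (stage ids here are illustrative):

// addStage appends stages sequentially; a wave deploys several stages in parallel.
const wave = cdk_pipeline.addWave("ParallelDeploy");
wave.addStage(new BuildImageTestStage(this, "StageA"));
wave.addStage(new BuildImageTestStage(this, "StageB"));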
A CDK pipelines.CodePipeline makes it easy to create a pipeline that deploys CDK apps. It abstracts away the details of the more general-purpose codepipeline.Pipeline. If your use case is something other than building/testing/deploying CDK apps, though, you may be better off working directly with the latter, as the docs say.

how to edit an already deployed pipeline in data fusion?

I am trying to edit a pipeline which is already deployed. I understand that we can duplicate the pipeline and rename it, but how can I make a change to the existing pipeline? Renaming would require a change in the production scheduling jobs as well.
There is one way, through the HTTP calls executor:
Open https://<cdf instance url ..datafusion.googleusercontent.com>/cdap/httpexecutor
Select PUT (to change the pipeline code) from the drop-down and give the path
namespaces/<namespace_name>/apps/<pipeline_name>
Go to the body section and paste the new pipeline code (i.e. the exported JSON of the updated pipeline).
Click on SEND and the response should come back as "Deploy Complete" with status code 200.
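If you prefer to script it rather than click through the httpexecutor UI, the same update can in principle be sent straight to the CDAP REST API behind it. This is only a rough sketch: the instance URL, namespace, pipeline name, token source and file name are placeholders, and the /api and /v3 prefixes follow the Cloud Data Fusion REST docs, so verify them against your instance.

import { readFileSync } from "fs";

// Placeholders: replace with your instance's API endpoint, namespace and pipeline.
const apiEndpoint = "https://<cdf-instance>.datafusion.googleusercontent.com/api";
const namespace = "<namespace_name>";
const pipelineName = "<pipeline_name>";
const accessToken = process.env.CDF_ACCESS_TOKEN!; // e.g. a Google OAuth access token

async function redeployPipeline() {
  // Exported JSON of the updated pipeline.
  const body = readFileSync("updated-pipeline.json", "utf8");
  const res = await fetch(`${apiEndpoint}/v3/namespaces/${namespace}/apps/${pipelineName}`, {
    method: "PUT",
    headers: { Authorization: `Bearer ${accessToken}`, "Content-Type": "application/json" },
    body,
  });
  console.log(res.status, await res.text()); // expect 200 / "Deploy Complete"
}

redeployPipeline().catch(console.error); // requires Node 18+ for global fetch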

Cloudformation: The resource you requested does not exist

I have a CloudFormation stack which has a Lambda function that is mapped as a trigger to an SQS queue.
What happened was that I had to delete the mapping and create it again manually because I wanted to change the batch size. Now when I try to update the mapping, CloudFormation throws an error with the message "The resource you requested does not exist".
The resource mapping code looks like this:
"EventSourceMapping":{
"Properties":{
"BatchSize":5,
"Enabled":"true",
"EventSourceArn":{
"Fn::GetAtt":[
"ProcessorQueue",
"Arn"
]
},
"FunctionName":{
"Fn::GetAtt":[
"ProcessorLambda",
"Arn"
]
}
},
"Type":"AWS::Lambda::EventSourceMapping"
}
I know that deleting the mapping CloudFormation created initially and adding it back manually is what's causing the issue. How do I fix this? Because right now I cannot push any update.
Please help.
What you did is, from my perspective, a mistake. When you use CloudFormation you are not supposed to apply changes manually. You can, and maybe that's fine if you don't care about the stack once it is created. But since you are trying to update the stack, that tells me you want to keep the stack and update it over time.
To narrow down your problem, first let's make clear that the manually-created mapping is out of sync with your CloudFormation stack. So, from a CloudFormation perspective, it doesn't matter whether you keep that mapping or not. I'm wondering what would happen if you kept the manually-created mapping and created a new one from CloudFormation? Maybe it would complain, since you would have duplicate mappings for the same (lambda, queue) pair. Try this:
Create a change set for your stack, in which you completely remove the EventSourceMapping resource from your template. This step basically cleans up the dangling reference. Apply the change set.
Then, and this is where I think you may run into some kind of issue, add the EventSourceMapping back to your stack.
If you get errors in step 2, like "this mapping already exists", you will have to remove the manually-created mapping from the console, and then try step 2 again.
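If you'd rather not hunt for the manually-created mapping in the console, here is a rough sketch of the same cleanup done with the AWS SDK for JavaScript v3. The function name is a placeholder (use the deployed function's physical name), and you should double-check you are deleting the right mapping before running it:

import {
  LambdaClient,
  ListEventSourceMappingsCommand,
  DeleteEventSourceMappingCommand,
} from "@aws-sdk/client-lambda";

const lambda = new LambdaClient({});

// Delete the out-of-band SQS -> Lambda mapping so CloudFormation can recreate
// it from the template on the next stack update.
async function removeManualMapping(functionName: string) {
  const { EventSourceMappings } = await lambda.send(
    new ListEventSourceMappingsCommand({ FunctionName: functionName })
  );
  for (const mapping of EventSourceMappings ?? []) {
    if (mapping.EventSourceArn?.includes(":sqs:") && mapping.UUID) {
      console.log(`Deleting mapping ${mapping.UUID} for ${mapping.EventSourceArn}`);
      await lambda.send(new DeleteEventSourceMappingCommand({ UUID: mapping.UUID }));
    }
  }
}

removeManualMapping("processor-lambda-physical-name").catch(console.error);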
You probably know now that you should not have removed the resource manually. If you change the CloudFormation template, you can update the stack without touching resources that did not change in the template. You can try to recreate the resource with the exact same physical name: https://aws.amazon.com/premiumsupport/knowledge-center/failing-stack-updates-deleted/ The other option is to remove the resource from the template, update, then add it back and update again - from the same doc.
While the comments above are valid, I find it interesting that no one mentioned a much simpler option: using SAM commands (sam build / sam deploy). It's understandable that during development and architecture design there may be flaws and situations where manual input in the console is necessary, so here is something I come back to every time I have a similar issue.
Simply comment out the chunk of the template that is causing trouble and run sam build / sam deploy on top of it. The CloudFormation stack will recognize that the resource is no longer in the template and will delete it.
Since the resource is no longer in the architecture anyway (it was removed manually earlier), the update will pass that step and the stack will update successfully.
Then simply uncomment it, make any necessary changes (if any), and deploy again.
Works every time.

Conditionally execute stage in AWS Codepipeline

I would like to conditionally execute a certain stage in AWS CodePipeline depending on whether I put a certain file in a repo location. So, if I put "some_file.txt" in a certain location in the repo, I want CodePipeline to check for the existence of this file and, if it's there, continue on to deploy the code to production, otherwise stop at that stage.
With this I would like to avoid a manual approval action and control the release process by committing a file. Is this possible, and what would be best practice?
I think you could create a lambda action for that:
Invoke an AWS Lambda function in a pipeline in CodePipeline
The Lambda function can access the input artifact and check whether your file of interest is there or not.
Depending on the outcome of the check, the function will call either put_job_success_result or put_job_failure_result to continue or stop the pipeline.
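Here's a rough sketch of such a Lambda function, using the AWS SDK for JavaScript v3. The marker file name, the use of UserParameters, and the adm-zip dependency are illustrative choices of mine, and the function's role is assumed to have read access to the pipeline's artifact bucket:

import { CodePipelineClient, PutJobSuccessResultCommand, PutJobFailureResultCommand } from "@aws-sdk/client-codepipeline";
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import AdmZip from "adm-zip";

const cp = new CodePipelineClient({});
const s3 = new S3Client({});

export const handler = async (event: any) => {
  const job = event["CodePipeline.job"];
  // File to look for, e.g. "some_file.txt", passed via the action's UserParameters.
  const marker = job.data.actionConfiguration.configuration.UserParameters || "some_file.txt";
  try {
    // The input artifact is a zip of the source sitting in the artifact bucket.
    const { bucketName, objectKey } = job.data.inputArtifacts[0].location.s3Location;
    const obj = await s3.send(new GetObjectCommand({ Bucket: bucketName, Key: objectKey }));
    const zip = new AdmZip(Buffer.from(await obj.Body!.transformToByteArray()));

    if (zip.getEntry(marker)) {
      // Marker present: report success so the pipeline continues to the deploy stage.
      await cp.send(new PutJobSuccessResultCommand({ jobId: job.id }));
    } else {
      // Marker absent: fail this action, which stops the pipeline here.
      await cp.send(new PutJobFailureResultCommand({
        jobId: job.id,
        failureDetails: { type: "JobFailed", message: `${marker} not found in source artifact` },
      }));
    }
  } catch (err) {
    await cp.send(new PutJobFailureResultCommand({
      jobId: job.id,
      failureDetails: { type: "JobFailed", message: String(err) },
    }));
  }
};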
You can use the buildspec file to check if the needed file is present. If not, you can execute a "stop-pipeline-execution" command: https://docs.aws.amazon.com/cli/latest/reference/codepipeline/stop-pipeline-execution.html
The required args can be fetched from the env vars, and one more thing to note is to give that stage of yours adequate permission(s) to be able to execute the command.
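If that check runs inside a CodeBuild action, the SDK equivalent of the CLI call could look roughly like this sketch. PIPELINE_NAME and PIPELINE_EXECUTION_ID are assumed to be provided as environment variables on the action, e.g. via the #{codepipeline.PipelineExecutionId} pipeline variable:

import { existsSync } from "fs";
import { CodePipelineClient, StopPipelineExecutionCommand } from "@aws-sdk/client-codepipeline";

async function stopReleaseIfMarkerMissing() {
  // Marker present in the checked-out source: let the pipeline continue.
  if (existsSync("some_file.txt")) return;

  const cp = new CodePipelineClient({});
  await cp.send(new StopPipelineExecutionCommand({
    pipelineName: process.env.PIPELINE_NAME!,
    pipelineExecutionId: process.env.PIPELINE_EXECUTION_ID!,
    abandon: true, // don't wait for in-progress actions to finish
    reason: "some_file.txt not present; stopping release",
  }));
}

stopReleaseIfMarkerMissing().catch((err) => {
  console.error(err);
  process.exit(1);
});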

Is there a way to group my DynamoDB export tasks on one EMR cluster?

When I set up a recurring backup via the export function in the DynamoDB console, the task it creates automatically creates a new EMR cluster when it runs. Some of my tables need to be backed up but are fairly small. What I end up with is a huge number of large servers running to back up some relatively small tables. Is there any easy way to chain these tasks to run on one server group, in series or in parallel?
Yes, it is possible. There is no direct way, but it can be done with some additional tweaking on the Data Pipeline end. You first need to understand how Data Pipeline actually runs your export job by default.
When you click the export button on the DynamoDB console, it takes you to the Data Pipeline console to create a pipeline for the export.
After filling out the template, instead of running it, you can use the Edit in Architect feature to alter the current template, which only works with one table.
On the Architect page, if you look at the Activities section, you will find an EmrActivity running an EMR step with the params shown below. This EMR step runs the export job using the parameters you initially passed in the template. Note that it also has a RunsOn reference to the EmrClusterForBackup resource, which you can find in the Resources section.
s3://dynamodb-emr-#{myDDBRegion}/emr-ddb-storage-handler/2.1.0/emr-ddb-2.1.0.jar,org.apache.hadoop.dynamodb.tools.DynamoDbExport,#{output.directoryPath},#{input.tableName},#{input.readThroughputPercent}
To run the export on other DDB tables using the same EMR resource, you simply need to create another EmrActivity object by clicking Add and then Add EmrActivity on the Architect page. On this activity, you can use the same RunsOn as the previous activity, and in the step params you can manually edit in the other table name and its export path,
like
s3://dynamodb-emr-#{myDDBRegion}/emr-ddb-storage-handler/2.1.0/emr-ddb-2.1.0.jar,org.apache.hadoop.dynamodb.tools.DynamoDbExport,s3://myexport-bucket/table2/,table2,0.9
You can extend this to multiple tables.
Note: this can also easily be done for multiple tables by using a JSON file as the Data Pipeline definition, editing it to add more activities and parameters, and then importing it back to run later.
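As a rough sketch of that JSON-definition route via the Data Pipeline API: the pipeline id, activity id, bucket, table name and throughput below are placeholders, and since PutPipelineDefinition replaces the whole definition, all the objects from your exported definition still need to be included alongside the new activity.

import { DataPipelineClient, PutPipelineDefinitionCommand } from "@aws-sdk/client-data-pipeline";

// Same jar/class as the default export step, pointed at a different table and path.
const exportStep = (table: string, outPath: string) =>
  "s3://dynamodb-emr-#{myDDBRegion}/emr-ddb-storage-handler/2.1.0/emr-ddb-2.1.0.jar," +
  `org.apache.hadoop.dynamodb.tools.DynamoDbExport,${outPath},${table},0.9`;

async function addSecondExportActivity() {
  const client = new DataPipelineClient({});
  await client.send(new PutPipelineDefinitionCommand({
    pipelineId: "df-XXXXXXXXXXXX", // placeholder
    pipelineObjects: [
      // ...all the objects from the exported definition go here unchanged...
      {
        id: "TableBackupActivity2",
        name: "TableBackupActivity2",
        fields: [
          { key: "type", stringValue: "EmrActivity" },
          { key: "runsOn", refValue: "EmrClusterForBackup" }, // reuse the same cluster
          { key: "step", stringValue: exportStep("table2", "s3://myexport-bucket/table2/") },
        ],
      },
    ],
  }));
}

addSecondExportActivity().catch(console.error);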