I'm trying to get this repo running: https://github.com/mydatastack/google-analytics-to-s3.
A link is provided to launch the AWS CloudFormation stack but it is no longer working as the S3 bucket containing the template is no longer active.
I have two questions about getting the data pipeline running:
First, what is 631216aef6ab2824fc63572d1d3d5e6c.template, and can I create it from the three .yml files in the CloudFormation folder?
I've tried to create a template from collector-ga.yml through CloudFormation Designer, but it fails. I think that's because the Resources referenced in the .yml aren't available when creating a template from collector-ga.yml alone. I've also tried uploading the repo to S3 and creating a template from there, but that was also unsuccessful.
Second, how can I launch the stack from the repo? I've found very little information online, so an explanation or a pointer to relevant resources would be appreciated.
This repository doesn't use "standard" CloudFormation templates; it uses AWS SAM. You'll have to install the SAM CLI and use it to deploy the CloudFormation stack. If you run sam deploy --guided, it will walk you through setting up the necessary S3 bucket and related resources in your AWS account. SAM will upload the necessary files, resolve the internal local links between the templates by replacing them with S3 URLs, and construct a packaged.yml template which it will use to deploy the stack.
Also, check out the AWS SAM user guide for more information.
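For example, from the repo root the whole flow is roughly as follows (a minimal sketch; the guided run prompts for a stack name, region, and artifact bucket, and can save your answers to samconfig.toml):

sam build
sam deploy --guided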
Related
I'm fairly new to AWS and the CDK, but I've been working on a project that deploys to AWS via a pipeline, using YAML for the CloudFormation template and then a Node script to run cdk deploy on a set of stack files written in TypeScript.
In the CloudFormation YAML where the cdk-toolkit is defined, there's a bucket resource named X. After the toolkit has been created/updated in the pipeline, the cdk deploy command is executed to deploy some stacks and workers, which should live in bucket X. However, they aren't automatically uploaded there, so I've tried using the --parameters flag to specify X, as below:
cdk deploy --toolkit-stack-name my-toolkit --parameters uploadBucketName=X --ci --require-approval never
When I do this I get the following error in the pipeline for the first stack that gets deployed:
Parameters: [uploadBucketName] do not exist in the template
I assumed this meant that the MyFirstStack.ts file was missing a parameter definition, as suggested by the AWS documentation, but it's not clear to me why this is necessary or how it's supposed to be used when it's the cdk deploy command that provides the value for this parameter. I tried adding it per the docs:
import { CfnParameter } from "aws-cdk-lib"; // "@aws-cdk/core" on CDK v1

const uploadBucketName = new CfnParameter(this, "uploadBucketName", {
  type: "String",
  description: "The name of the Amazon S3 bucket where uploaded files will be stored.",
});
but I'm not sure this is really the right thing to do, and besides, it doesn't work: I still get the same error.
Does anyone have any ideas where I'm going wrong?
I need to modify the Video on Demand on AWS CloudFormation template.
When I deploy the main template without any modifications and upload a video to the source S3 bucket, the folders created in the destination S3 bucket are named with the GUID of the DynamoDB item.
In my case, the requirement is that those folders in the destination S3 bucket should be created with meaningful names.
Where exactly in the template do I need to make modifications to achieve this?
Steps for modifying a CloudFormation stack can be found at:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-get-template.html
In general, stacks support passing parameters of the form:
aws cloudformation create-stack --stack-name mystack \
  --template-body file://mypath/mystack.json \
  --parameters ParameterKey=KeyPairName,ParameterValue=MyKeyPair
So one of the passed parameters could be the desired output filename.
Regarding file naming within MediaConvert, the service supports a set of time and date variables for naming output files, which can be found at: https://docs.aws.amazon.com/mediaconvert/latest/ug/list-of-settings-variables-with-examples.html
Alternatively, you could rename the files after output using a Lambda Function triggered by S3 File Events. This would allow you to generate or retrieve a name conducive to your workflows, as well as set Tags and other object metadata.
Examples of this workflow are documented here: https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html
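Since S3 has no true rename operation, such a Lambda would copy each object to the new key and delete the original. The equivalent CLI operation looks like this (bucket and key names are placeholders):

aws s3 mv s3://destination-bucket/66e0b57f-guid-example/output.m3u8 \
  s3://destination-bucket/meaningful-name/output.m3u8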
I hope this information helps you tune your workflow. Good luck!
I am trying to build a pipeline in GitLab that uses their provided gl-cloudformation template to deploy infrastructure to AWS:
https://gitlab.com/gitlab-org/cloud-deploy/-/blob/master/aws/src/bin/gl-cloudformation
I am running into a problem with creating IAM roles, since CloudFormation needs extra confirmation to deploy stacks that create IAM resources. Normally I would just run aws cloudformation create-stack --capabilities CAPABILITY_NAMED_IAM, but since I am using their template I can't.
Does anyone have experience running GitLab with CloudFormation?
This is not possible with that image. You must either use a different CI image or not give the IAM resource a custom name (let AWS generate the name).
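If you switch to a generic CI image with the AWS CLI installed, the deploy job can pass the capability itself; a minimal sketch (stack and template names are placeholders):

aws cloudformation deploy \
  --stack-name my-stack \
  --template-file template.yml \
  --capabilities CAPABILITY_NAMED_IAM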
I am using CloudFormation with SAM to deploy a stack which contains:
S3 Bucket
Cognito
AWS::Serverless::Api
AWS::Serverless::Function (authorizers + microservices, with Type: Api events pointing at endpoints of the API Gateway)
Log Groups
To deploy my stack, I first run aws cloudformation package to package the Lambdas and then run aws cloudformation deploy to deploy the generated template. This is working.
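For reference, the two commands look roughly like this (bucket and stack names are placeholders):

aws cloudformation package --template-file template.yml \
  --s3-bucket my-artifact-bucket --output-template-file packaged.yml
aws cloudformation deploy --template-file packaged.yml \
  --stack-name my-stack --capabilities CAPABILITY_IAM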
My goal now is to be able to update a microservice without deploying the entire stack (not building authorizers and other microservices), similar to serverless deploy function in the Serverless framework. This should preferably be one reusable template that uses a macro or just replaces text in the file.
The problem I am facing with this:
Running aws lambda update-function-code requires the Lambda to be redeployed.
To redeploy the Lambda, I have to declare AWS::Serverless::Function. For the function to be part of the API Gateway, AWS::Serverless::Api must be declared as well.
Declaring AWS::Serverless::Api requires all the other functions to be defined, or they will be removed from the API Gateway.
I feel like I am stuck here and have not found other options for achieving my goal.
Since you're using SAM, I'd recommend deploying and updating your application using the sam cli commands.
You can run
sam build
sam package
sam deploy
When you first run sam deploy, it deploys your application; all subsequent sam deploy commands will update your existing CloudFormation stack, changing only the resources that need updating.
If you opt to keep using the standard CloudFormation CLI commands, you could use aws cloudformation update-stack so that you're not deploying an entire new stack.
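For example, a minimal update-stack call could look like this (names are placeholders; the --capabilities flag is only needed if the stack contains IAM resources):

aws cloudformation update-stack \
  --stack-name my-stack \
  --template-body file://packaged.yml \
  --capabilities CAPABILITY_IAM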
I am using Terraform to create all the infrastructure (CodePipeline, Lambda, buckets) on AWS.
Currently, I've created a pipeline that builds the source zip file and puts it in an S3 bucket, but the Lambda keeps using the older source, so I update the URL manually in the AWS console and it works.
Now I want to automate the flow, but the available solutions are:
AWS SAM + CFT
Codebuild Stage to update the source using AWS CLI
Create a lambda that updates the source
Code Deploy + AWS SAM + CFT
I am not willing to use CFT at all, since all of our code is in Terraform and CFT would require me to create new Lambdas instead of using the existing ones.
Is there any other, simpler way to update the Lambda source through CodePipeline?
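For reference, the manual step I do in the console corresponds to this CLI call (function, bucket, and key names are placeholders):

aws lambda update-function-code \
  --function-name my-function \
  --s3-bucket my-artifact-bucket \
  --s3-key source.zip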
The preferred way to deploy a Lambda via CodePipeline is with a CloudFormation deploy action [1]. Since you are not looking to use CloudFormation, the next option could be to run your terraform plan/apply commands from within a CodeBuild job that is part of the pipeline. You will need to give the CodeBuild role the permissions required for resource creation (or export credentials in environment variables for TF to use via the method in [2]) and install the TF binary within the install phase of the buildspec.
Ref:
[1] Building a Continuous Delivery Pipeline for a Lambda Application with AWS CodePipeline - https://docs.aws.amazon.com/lambda/latest/dg/build-pipeline.html
[2] How to retrieve Secret Manager data in buildspec.yaml
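For illustration, the commands such a CodeBuild job might run could look like this (a sketch only; the Terraform version is an assumption, so check the HashiCorp releases page):

# install phase: fetch the Terraform binary (version is an assumption)
curl -sLo terraform.zip https://releases.hashicorp.com/terraform/1.5.7/terraform_1.5.7_linux_amd64.zip
unzip -o terraform.zip -d /usr/local/bin
# build phase: non-interactive plan and apply
terraform init -input=false
terraform plan -out=tfplan -input=false
terraform apply -input=false tfplan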