The s-function.json needs the variable "customRole": "${myLambdaRole}",
BUT if somebody else gets my serverless project via git clone, they don't get the _meta folder.
They then run serverless project init with the same stage and region. That creates the _meta folder, BUT it does NOT populate s-variables-common.json with the Output Variables from s-resources-cf.json.
When they then try to deploy with serverless dash deploy, it fails with
Serverless: WARNING: This variable is not defined: myLambdaRole
Unfortunately, even running serverless resources deploy does not fix the problem, because it reports
Serverless: Deploying resources to stage "dev" in region "us-east-1" via Cloudformation (~3 minutes)...
Serverless: No resource updates are to be performed.
and the s-variables-common.json is still not populated with the necessary output variables.
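For illustration, on the machine where the resources were originally deployed, the s-variables-common.json inside _meta contains the CloudFormation outputs as plain variables, and that is exactly what the freshly cloned copy is missing. A made-up sketch of what the populated file might look like (all keys and values here are illustrative, apart from myLambdaRole):
{
  "project": "myproject",
  "projectBucket": "serverless.us-east-1.myproject.example.com",
  "myLambdaRole": "arn:aws:iam::123456789012:role/myproject-dev-IamRoleLambda"
}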
This effectively means that it is impossible to work together as a team on the same stage, in the same region, with the same resources, when sharing the project via Git.
Since we don't want to check the _meta folder into Git, I would suggest that a serverless project init call should make sure that all the Output Variables are properly fetched and written to s-variables-common.json.
This is pretty important. How else do you guys share projects via Git?
There is a plugin called "meta sync" that should solve your problem:
https://github.com/serverless/serverless-meta-sync
tl;dr: I can deploy a single CFN stack as part of my pipeline, but how do I deploy multiple dynamic stacks?
An even shorter tl;dr: how would you do this? forEach BuildStage.artifact, invoke CloudFormation.build
I am using CodePipeline in a pretty conventional way. The goal is to source control my CloudFormation templates, push them through the pipeline when a template changes, and then automatically deploy the stack.
Source Stage (CodeCommit commit my CFN yaml templates)
Build Stage (CodeBuild finds the new files from the commit, and pushes them to S3)
Deploy Stage (CloudFormation deploys my templates as CFN stacks)
Almost everything is working great. I commit my template changes to CodeCommit, the build stage runs my CodeBuild gatekeeper, which gathers only the files that have changed and uploads them to S3. So far so good.
The challenge is that sometimes I have one template change, and sometimes I have multiple (n). I can detect changed files and get them up to S3 in my build stage without a problem. If I commit a change for one template, everything works fine: I can create an exported variable with my template's location on S3, pass that to my deploy stage, and have the CloudFormation deploy action use that file as the template source. But how would I handle this if I have 2 templates?
I can't just create endless exported variables in my build stage.
And even if I could, AFAIK there is no way to iterate over each entry in the deploy stage.
My thought is I would need to do one of the following:
Inside of my current buildspec (after I upload the files to S3), use the AWS CLI to invoke a CFN stack build. I can add this as part of a loop, so it iterates on each file to be uploaded. OR
After my build stage, use a Lambda to perform the same as #1. Loop through each file, and then use the CLI or SDK to invoke a CFN stack build.
Both of these options seem to defeat the purpose of the deploy stage altogether, which seems clunky to me.
Are there other options I am missing? What would you do?
Just want to answer my own question, in case anyone else is trying to figure out how to do this.
I ended up going with option 1...just doing a CLI CFN deployment directly from within CodeBuild. I was really trying to shoehorn the idea of using a CodePipeline deploy stage, but this works just fine.
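For reference, a rough sketch of that deployment loop in the buildspec's post-build commands (the changed-file list and the stack-naming convention are made up for illustration, not copied from my actual pipeline):
# changed-templates.txt is assumed to contain one changed template path per line.
while read -r template; do
  stack_name="my-app-$(basename "$template" .yaml)"
  echo "Deploying $template as stack $stack_name"
  aws cloudformation deploy \
    --template-file "$template" \
    --stack-name "$stack_name" \
    --capabilities CAPABILITY_NAMED_IAM \
    --no-fail-on-empty-changeset
done < changed-templates.txt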
If anyone else ends up coming along with a better solution, I am all ears.
I have a CDK-based CodePipeline with a 'Deploy' step for a Lambda function that started to fail recently, although it had already succeeded multiple times for this branch in the past. The odd things about it, compared to the production branch (which deploys fine):
the code changes are minimal (2 LOC) and pass the unit tests and all other CI steps
the only other change is a bumped version number of the deployed application in npm's package.json file
The CDK stack has not changed recently.
The failing step is the one where the CI pipeline tries to deploy the Lambda-based application via CloudFormation. When reviewing the error in CloudFormation, the error 'Update to resource type AWS::CodeDeploy::Application is not supported' pops up for the resource type AWS::CodeDeploy::Application, which is deployed by a CodePipelineAction.S3DeployAction.
The error seems to refer to the deploy stack that was created, not to a resource within it.
Update: I got an answer from AWS Support:
This is a known issue which you can track on the GitHub CDK issues. I encourage you to add your voice and experience to this issue. The more input we have, the more visibility we have to improve the service. Please see the link below:
https://github.com/aws/aws-cdk/issues/15947
If you are updating this resource's tags, there is a CDK workaround: you can exclude resource types from tagging within your construct. Below is a sample of how this would look:
// Exclude the CodeDeploy application from tagging, since updates to its tags are not supported.
const tagOptions = {
  excludeResourceTypes: ['AWS::CodeDeploy::Application'],
};
// "deployment" is the construct (or scope) whose resources are being tagged.
cdk.Tags.of(deployment).add('Name', `buffer-${props.environment}`, tagOptions);
The likely reason it broke: it seems that this resource type didn't support tags before, and now it does, which triggers the unsupported update.
I am using the AWS CLI task to deploy a Lambda layer. The build pipeline upstream zips up the code, publishes the artifact and then downloads that artifact.
Now in the release pipeline I'm deploying that artifact via an AWS CLI command.
I'm trying to figure out a way to dynamically get the current working directory so I don't need to hardcode it. In the options and parameters section you can see I'm trying to use $(Pipeline.Workspace) but it doesn't resolve correctly.
Is this possible?
Correct me if I am wrong, but it looks like you are running this in an Azure Release, not in Pipelines?
If that is the case, I think the variable you are looking for is $(Release.PrimaryArtifactSourceAlias).
See the section of the document that talks about release specific variables: https://learn.microsoft.com/en-us/azure/devops/pipelines/release/variables?view=azure-devops&tabs=batch#default-variables---release
Yes. This is completely achievable.
From your screenshot, you are using the Release Pipeline to deploy the Artifacts.
In your situation, $(Pipeline.Workspace) can only be used in a build pipeline.
Refer to this doc: Classic release and artifacts variables
You can use the variable: $(System.ArtifactsDirectory) or $(System.DefaultWorkingDirectory)
The directory to which artifacts are downloaded during deployment of a release. The directory is cleared before every deployment if it requires artifacts to be downloaded to the agent. Same as Agent.ReleaseDirectory and System.DefaultWorkingDirectory.
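For example, with the AWS CLI task set to run lambda publish-layer-version, the options and parameters could reference the variable roughly like this (the layer name and the artifact sub-path are made up; adjust them to your artifact layout):
--layer-name my-layer --zip-file fileb://$(System.ArtifactsDirectory)/_MyBuild/drop/layer.zip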
Within our team, we all have our own dev project, and then we have a test and prod environment.
We are currently in the process of migrating from Deployment Manager and the gcloud CLI to Terraform. However, we haven't been able to figure out a way to create isolated backends within the GCS backend. We have noticed that the remote backend supports setting a dedicated workspace, but we haven't been able to set up something similar with GCS.
Is it possible to state that Terraform resource A will have a configurable backend that we can adjust per project, or is the equivalent possible with workspaces?
So that we can use either tfvars or -var parameters to switch between projects?
As it stands, every time we attempt to make the backend configurable through variables, terraform init fails with
Error: Variables not allowed
How does one go about creating isolated backends for each project?
Or, if that isn't possible, how can we guarantee that a backend shared across multiple projects will not collide and corrupt the state?
Your backend, meaning your backend bucket, must be known when you run your terraform init command.
If you don't want to use workspaces, you have to customize the backend value before running the init. We use make to achieve this: depending on the environment, make creates a backend.tf file with the correct backend name and then runs the init command.
EDIT 1
We have this piece of shell script which creates the backend file before triggering the terraform command (it's our Makefile that does this):
cat > $TF_export_dir/backend.tf << EOF
terraform {
  backend "gcs" {
    bucket = "$TF_subsidiary-$TF_environment-$TF_deployed_application_code-gcs-tfstatebackend"
    prefix = "terraform/state"
  }
}
EOF
Of course, the bucket name pattern depends on our project. $TF_environment is the most important variable: depending on which env var is set, a different bucket is reached.
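As a made-up usage example, with TF_subsidiary=acme, TF_environment=dev and TF_deployed_application_code=billing, the generated backend.tf points at the bucket acme-dev-billing-gcs-tfstatebackend, after which the usual commands run against that isolated state:
terraform init
terraform plan -var-file="dev.tfvars"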
When I trigger a deployment via Jenkins (the CodeDeploy plugin), I get the following error:
No such file or directory - /opt/codedeploy-agent/deployment-root/edbe4bd2-3999-4820-b782-42d8aceb18e6/d-8C01LCBMG/deployment-archive/appspec.yml
However, if I trigger a deployment into the same deployment group via CodeDeploy directly, and specify the same zip in S3 (produced by the Jenkins trigger), this step passes.
What does this mean, and how do I work around it? I am currently integrating a few things, so I will need to deploy both via CodeDeploy and via Jenkins. I will run the CodeDeploy-triggered deployment whenever I need to ensure that the smaller unit is functioning well.
Update
Just mentioning another point, in case it applies. I was previously using a different CodeDeploy application and deployment group on the same EC2 instances, deploying both via Jenkins and via CodeDeploy directly. In order to fix some issue (deployments allegedly failing because existing files could not be overwritten), I had deleted everything inside the /opt/codedeploy-agent/deployment-root/<directory containing deployments> directory, trying to follow what was mentioned in this answer. Note, however, that I deleted only the items inside that directory. Thereafter, I started getting this "appspec.yml not found in deployment archive" error. So I created a new application and deployment group, and I have been working with those since.
So another point to consider is whether I should do some further cleanup, in case the Jenkins-triggered deployment is somehow still affected by those deletions (even though it refers to the new application and deployment group).
As part of its process, CodeDeploy needs to reference previous deployments for redeployment and deployment-rollback operations. These references are maintained outside of the deployment archive folders. If you delete these archives manually, as you indicate, a CodeDeploy install can get fatally corrupted: the references left to previous deployments are no longer correct or consistent, and deployments will fail.
The best thing at this point is to remove the old installation completely and re-install. This will allow the CodeDeploy agent to work correctly again.
I have learned the hard way not to remove/modify any of the CodeDeploy install folders or files manually. Even if you change apps or deployment groups, CodeDeploy will figure it out itself, without the need for any manual cleanup.
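For what it's worth, a rough sketch of a clean re-install on an Amazon Linux / RHEL-style instance (this is the standard agent install flow from the AWS docs, not something specific to your setup; the bucket and region in the download URL vary by region):
sudo service codedeploy-agent stop
sudo yum erase codedeploy-agent -y
# Remove the deployment-root and any other leftovers of the corrupted install.
sudo rm -rf /opt/codedeploy-agent
cd /tmp
wget https://aws-codedeploy-us-east-1.s3.us-east-1.amazonaws.com/latest/install
chmod +x ./install
sudo ./install auto
sudo service codedeploy-agent status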
In order to do a deployment, the bundle needs to contain an appspec.yml file, and the file needs to sit at the top level of the archive. It seems the error message means the host agent can't find the appspec.yml file.
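In other words, the zip has to be built so that appspec.yml ends up at the root of the archive rather than inside a sub-folder. A quick sketch (directory and file names are illustrative):
cd my-app                        # directory containing appspec.yml, scripts/, src/ ...
zip -r ../my-app.zip appspec.yml scripts/ src/
# Correct layout inside my-app.zip: appspec.yml, scripts/..., src/...
# Wrong layout: my-app/appspec.yml (zipping the parent folder), which triggers the "not found" error.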