AWS Copilot fails to create pipeline

I have a repo for all my docker stuff.
I would like to store copilot configs here as well, instead of adding a new copilot/ directory to the repo of every microservice.
As far as I know this should be possible.
So now I have one single copilot dir in a separate repo which looks like this:
copilot
  .workspace
  ...
  some-service
    manifest.yml
  other-service
    manifest.yml
etc. This works: I can add more services and deploy them.
However, when I tried to create a pipeline, it failed. According to the docs a pipeline should be able to handle multiple services, but I don't understand how.
I ran
copilot pipeline init
and pushed the resulting files to my repo.
Then I tried:
copilot pipeline update
But this returns an error:
ACTION REQUIRED! Go to https://console.aws.amazon.com/codesuite/settings/connections to update the status of connection xy-user-service from PENDING to AVAILABLE.
✘ Failed to create a new pipeline: pipeline-myApp-user-service.
✘ create pipeline: check if changeset is empty: create change set copilot-51ef519a-711b-4126-bfbd-3d618ef824a5 for stack pipeline-myApp-user-service: ValidationError: Template format error: Unrecognized resource types: [AWS::CodeStarConnections::Connection]
status code: 400, request id: 8a87f62a-ae14-4fe3-9a3e-8b965d2af794: describe change set copilot-51ef519a-711b-4126-bfbd-3d618ef824a5 for stack pipeline-myApp-user-service: ValidationError: Stack [pipeline-myApp-user-service] does not exist
status code: 400, request id: 44927d7e-2514-466a-94ff-51e932042737
The xy-user-service connection didn't exist; the list of connections was empty. I tried to create it by linking my Bitbucket account to AWS, but the error is still there.
What am I doing wrong?
Am I supposed to run copilot app init in the root directory of each and every microservice (they are in separate repos), and then create a separate pipeline for each?
Is it not possible to just store copilot configs in one place?
Thanks in advance!

It sounds like you have one repo with all of your configs, and separate repos for each service's source code.
If this is the case, the pipeline is not working because it needs the URL of the source repo, not the repo with the config files; this is how it can detect the changes to the source code that trigger the pipeline.
You CAN have multiple services in one Copilot app, but if you want a single pipeline for the entire app, the microservices' source code needs to be in one repository.
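For reference, the pipeline manifest that copilot pipeline init generates points at exactly one source repository, which is why it must be the repo holding the services' source code. A minimal sketch, assuming a Bitbucket monorepo (the repository URL, branch, and stage names below are placeholders, and the exact file layout may differ by Copilot version):

# copilot/pipeline.yml (sketch)
name: pipeline-myApp
version: 1
source:
  provider: Bitbucket
  properties:
    branch: main
    repository: https://bitbucket.org/my-org/my-monorepo
stages:
  - name: test
  - name: prod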
See:
https://aws.github.io/copilot-cli/docs/concepts/applications/
https://aws.github.io/copilot-cli/docs/concepts/services/
https://aws.github.io/copilot-cli/docs/concepts/pipelines/
(Cross-posted from https://github.com/aws/copilot-cli/issues/2109; we primarily use GitHub Issues for Q&As.)
Thanks so much!

Related

Automate Apigee API Proxy Deployment in CI/CD Pipeline

Is there any recommended method to create and deploy the Apigee API Proxy Bundle via a CI/CD pipeline (I'm using Azure DevOps)?
I want to avoid unnecessary API Proxy Bundles being created and deployed when there are no changes to be made. I've already tested this, and identical bundles still create a new revision.
So far, my own solution is to write a PowerShell script to use apigeecli to download the current bundle and compare it against the apiproxy that I have locally in my repo. If it differs, I create and deploy a new API Proxy Bundle.
Has anyone seen anything better?
I have mainly automated this with GitLab, but I'll share my ideas, which may help with your specific case.
We use version control to manage our Apigee repos. I have set up a GitLab pipeline that checks the diff any time we push to our repository, and only if there are changes do we redeploy the proxy to Apigee. Normally, when the pipeline is triggered, we check whether there are any changes to target servers, proxies, and shared flows, and if changes are detected, we check the deployed revision and environments.
Through my deployment script, I am able to get a list of these changes and pass them to the pipeline as a CHANGES variable. This means that only the modified proxies will be deployed.
In my pipeline I can do something like git diff --name-only $CI_COMMIT_SHA..$CI_COMMIT_BEFORE_SHA > /changes.txt and pass the content of that changes file to CHANGES so only the modified items are deployed.
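A minimal .gitlab-ci.yml sketch of that idea, assuming a hypothetical deploy.sh that accepts the list of changed paths (stage names and the script name are placeholders):

stages:
  - detect
  - deploy

detect_changes:
  stage: detect
  script:
    # List the files touched by this push and save them for the deploy job
    - git diff --name-only $CI_COMMIT_BEFORE_SHA..$CI_COMMIT_SHA > changes.txt
  artifacts:
    paths:
      - changes.txt

deploy_changes:
  stage: deploy
  script:
    # Pass the changed paths to the (hypothetical) deploy script as CHANGES
    - export CHANGES="$(cat changes.txt)"
    - ./deploy.sh "$CHANGES"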

Azure DevOps Pipeline: AWS CLI Task, How to reference predefined variables

I am using the AWS CLI task to deploy a Lambda layer. The build pipeline upstream looks like this:
It zips up the code, publishes the artifact and then downloads that artifact.
Now in the release pipeline I'm deploying that artifact via an AWS CLI command. The release pipeline looks like this:
I'm trying to figure out a way to dynamically get the current working directory so I don't need to hardcode it. In the options and parameters section you can see I'm trying to use $(Pipeline.Workspace) but it doesn't resolve correctly.
Is this possible?
Correct me if I am wrong, but it looks like you are running this in Azure Releases, not Pipelines?
If that is the case, I think the variable you are looking for is $(Release.PrimaryArtifactSourceAlias).
See the section of the document that talks about release specific variables: https://learn.microsoft.com/en-us/azure/devops/pipelines/release/variables?view=azure-devops&tabs=batch#default-variables---release
Yes. This is completely achievable.
From your screenshot, you are using the Release Pipeline to deploy the Artifacts.
In your situation, $(Pipeline.Workspace) can only be used in the Build Pipeline.
Refer to this doc: Classic release and artifacts variables
You can use the variable $(System.ArtifactsDirectory) or $(System.DefaultWorkingDirectory):
The directory to which artifacts are downloaded during deployment of a release. The directory is cleared before every deployment if it requires artifacts to be downloaded to the agent. Same as Agent.ReleaseDirectory and System.DefaultWorkingDirectory.
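For example, with lambda / publish-layer-version as the AWS CLI task's command and subcommand, the "Options and parameters" field can build the path from those release variables instead of hardcoding it; a sketch, where the drop folder and the layer/zip names are placeholders:

--layer-name my-layer --zip-file fileb://$(System.DefaultWorkingDirectory)/$(Release.PrimaryArtifactSourceAlias)/drop/lambda-layer.zip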

Trigger Gitlab CI/CD pipeline to deploy specific part of the repository

I have a repository on GitLab with a directory structure similar to this:
folder-a/
  python-a.py
folder-b/
  python-b.py
I am trying to set up a CI/CD pipeline on GitLab that will detect changes made to the Python code and deploy them to a production server. What I have currently requires the user to trigger the pipeline manually and input the folder name as a variable, which then causes the pipeline to cd into that folder and deploy the code inside it.
Is there any configuration or setting that can be added to the pipeline so that whenever a Merge Request is merged to the main branch, the pipeline triggers, detects which code was changed, and deploys the respective code, without the user having to trigger it manually and input the folder name as a variable?
You might be able to use only:changes / except:changes to do that.
You can have two jobs: one that deploys folder-a if something under folder-a/* has changed, and another that deploys folder-b if something under folder-b/* has changed, as sketched below.
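A minimal .gitlab-ci.yml sketch of that, assuming each folder contains a hypothetical deploy.sh (the branch name and script are placeholders):

deploy-folder-a:
  stage: deploy
  script:
    # Deploy only the code in folder-a
    - cd folder-a
    - ./deploy.sh
  only:
    refs:
      - main
    changes:
      - folder-a/**/*

deploy-folder-b:
  stage: deploy
  script:
    # Deploy only the code in folder-b
    - cd folder-b
    - ./deploy.sh
  only:
    refs:
      - main
    changes:
      - folder-b/**/*

On newer GitLab versions, the rules: keyword with changes: is the recommended replacement for only/except.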

AWS CodePipeline is failing at the deployment stage by timing out

I am trying to set up CI/CD for the API part of the application.
I have 3 stages:
1: Source (GitHub version 2)
2: Build (currently has no commands)
3: Deploy (provider is CodeDeploy (application))
Here is a screenshot of the events in CodeDeploy.
While creating the Deployment Group, I chose the option of downloading the CodeDeploy agent (I thought that was necessary).
While setting up the pipeline, I chose the options shown in the screenshot, which felt appropriate.
This pipeline has put an object into the S3 bucket for this pipeline, and CodeDeploy is acting on that source artifact.
Note: we have nothing on this EC2 instance; it's just the place where we will host our API. Currently, the EC2 instance is empty.
What would be the proper way to implement this? How can I overcome the issues I am facing?
Without an appspec.yml your deployment will fail. From the docs:
An AppSpec file must be a YAML-formatted file named appspec.yml and it must be placed in the root of the directory structure of an application's source code. Otherwise, deployments fail.
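A minimal appspec.yml sketch for an EC2/on-premises deployment, assuming the API should be copied to /var/www/api and the bundle contains a start script (both paths are placeholders):

version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/api
hooks:
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 300
      runas: root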

How can I create a pipeline as code in AWS CodePipeline

I am using AWS CodePipeline as my CI/CD tool. I have a pipeline template YAML file in my Git repo, and I wonder how I can link that file to AWS CodePipeline. What I want is for CodePipeline to create/update the pipeline based on my pipeline YAML file in GitHub.
I have searched and experimented in the AWS console. All I can do is manually create a pipeline via the console and upload the template file. It works, but it is not pipeline as code. If I want to change the stages in the pipeline, I have to manually update the pipeline in the AWS console or via a CloudFormation command.
Let me give an example: if I need to add a new stage to my pipeline, what I'd like to do is update the YAML file in the GitHub repo and commit it, and then have AWS CodePipeline read this YAML file and update itself. I don't want to manually update the stage via the AWS console.
Is there a way to sync the CodePipeline with my pipeline YAML file under source control?
I have seen a lot of people wondering about this kind of setup, where everything is managed via code, and I personally use it too with CodePipeline. Many people have already replied, but let me put it here in detail so that it can help anyone who wants to do this.
There are two ways to achieve this; let me try to explain both options here.
Option 1
Create two separate pipelines:
"Pipeline -1" (responsible for config changes, like adding extra stages to the main pipeline "Pipeline -2"), with two stages, source and deploy (CloudFormation):
source_Config (gitrepo_config) --> deploy_Config_Cfn
"Pipeline -2" (the actual deployment pipeline, with stages like source, build, and deploy, which is created from resource.yaml):
source_Resource (gitrepo_resource) --> build_Resource --> Deploy_Resource
Based on the above config, upload the template you use to create the main pipeline, "resource.yaml", to the repo "gitrepo_config".
Upload all the code to the repo "gitrepo_resource", depending on the deployment provider you are using for "Deploy_Resource".
Once the above setup is done, whenever you want to add extra stages to the pipeline, you make changes to "resource.yaml" in the Git repo and "Pipeline -1" does the rest; a rough sketch follows below.
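A rough CloudFormation sketch of such a "Pipeline -1", where the role ARNs, artifact bucket, CodeStar connection, repository ID, and stack name are placeholder parameters (it assumes the source repo is reached through a CodeStar connection):

AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  PipelineRoleArn: { Type: String }   # IAM role assumed by CodePipeline
  CfnRoleArn: { Type: String }        # IAM role CloudFormation uses to create/update "Pipeline -2"
  ArtifactBucket: { Type: String }    # S3 bucket for pipeline artifacts
  ConnectionArn: { Type: String }     # CodeStar connection to the Git provider
Resources:
  ConfigPipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      RoleArn: !Ref PipelineRoleArn
      ArtifactStore:
        Type: S3
        Location: !Ref ArtifactBucket
      Stages:
        - Name: Source
          Actions:
            - Name: source_Config
              ActionTypeId:
                Category: Source
                Owner: AWS
                Provider: CodeStarSourceConnection
                Version: '1'
              Configuration:
                ConnectionArn: !Ref ConnectionArn
                FullRepositoryId: my-org/gitrepo_config   # placeholder
                BranchName: main
              OutputArtifacts:
                - Name: ConfigSource
        - Name: Deploy
          Actions:
            - Name: deploy_Config_Cfn
              ActionTypeId:
                Category: Deploy
                Owner: AWS
                Provider: CloudFormation
                Version: '1'
              Configuration:
                ActionMode: CREATE_UPDATE
                StackName: my-app-resource-pipeline       # placeholder: the stack that creates "Pipeline -2"
                TemplatePath: ConfigSource::resource.yaml
                Capabilities: CAPABILITY_IAM
                RoleArn: !Ref CfnRoleArn
              InputArtifacts:
                - Name: ConfigSource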
Option 2 (a little more complex, but let me see if I can explain)
I was using Option 1 until I came up with this option.
This second way is essentially 100% code, because even with the above option I have to create "Pipeline -1" either manually or via CloudFormation the first time, and for later updates I also need to go to the console.
To overcome this, we can include both pipelines in the same CloudFormation template "resource.yaml"; then we only have to execute that CloudFormation stack once, and afterwards everything else is automatic.
I hope this will be helpful to everyone.
Note: In both options, keep in mind that if an execution of the resource pipeline "Pipeline -2" is in progress during a config change, that execution might be marked as failed. To overcome this, you can set up an additional trigger that starts "Pipeline -2" based on the success state of "Pipeline -1", in addition to the source code trigger; a sketch of such a trigger follows below.
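A small sketch of that additional trigger as an EventBridge rule, assuming hypothetical pipeline names/ARNs and an existing role that is allowed to start the target pipeline:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  StartPipeline2OnPipeline1Success:
    Type: AWS::Events::Rule
    Properties:
      EventPattern:
        source:
          - aws.codepipeline
        detail-type:
          - CodePipeline Pipeline Execution State Change
        detail:
          state:
            - SUCCEEDED
          pipeline:
            - my-config-pipeline                                                   # placeholder: "Pipeline -1"
      Targets:
        - Id: StartResourcePipeline
          Arn: arn:aws:codepipeline:us-east-1:111111111111:my-resource-pipeline    # placeholder: "Pipeline -2"
          RoleArn: arn:aws:iam::111111111111:role/start-pipeline-role              # placeholder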