Github Actions to mirror and sync with AWS codecommit - amazon-web-services

I am planning to synchronize a repo from GitHub to AWS CodeCommit. All the present code and future PRs merged to main, dev, and preprod should end up in AWS CodeCommit. I am looking at GitHub Actions and I see three different wikis/documentation pages. I am not sure which one to follow:
1. https://github.com/marketplace/actions/github-to-aws-codecommit-sync
2. https://github.com/marketplace/actions/mirroring-repository
3. https://github.com/marketplace/actions/automatic-repository-mirror

The first one (actions/github-to-aws-codecommit-sync) should be enough.
Its script entrypoint.sh does:
git config --global credential.'https://git-codecommit.*.amazonaws.com'.helper '!aws codecommit credential-helper $@'
git remote add sync ${CodeCommitUrl}
git push sync --mirror
That should push all local refs, including any PR branches that have been fetched (in the refs/pull/ namespace).
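If only the long-lived branches need to land in CodeCommit rather than every ref, a plain refspec push is an alternative to --mirror. A runnable sketch against throwaway local repositories (the bare repo stands in for CodeCommit; the remote name sync matches the snippet above):

```shell
#!/bin/sh
# Throwaway demo: push only selected branches by refspec instead of --mirror.
set -e
work=$(mktemp -d)
git init -q --bare "$work/codecommit.git"   # stand-in for the CodeCommit repo
git init -q "$work/src"
cd "$work/src"
git config user.email demo@example.com
git config user.name demo
echo hello > file.txt
git add file.txt
git commit -qm "initial commit"
base=$(git symbolic-ref --short HEAD)       # 'main' or 'master' depending on git version
git branch -q dev
git branch -q preprod
git branch -q feature-x                     # should NOT be mirrored
git remote add sync "$work/codecommit.git"
# Push only the long-lived branches; feature-x stays local.
git push -q sync "refs/heads/$base:refs/heads/$base" \
    refs/heads/dev:refs/heads/dev \
    refs/heads/preprod:refs/heads/preprod
git ls-remote --heads sync
```

This keeps scratch branches and stale refs out of the CodeCommit copy, at the cost of having to list the branches explicitly.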
The action should be called on merged PRs:
name: OnMergedPR
on:
  push:
    branches:
      - "**"
      - "!main"
  pull_request:
    branches:
      - main
    types: [opened, synchronize, closed]
  workflow_dispatch:
jobs:
  build:
    if: (!(github.event.action == 'closed' && github.event.pull_request.merged != true))
    ...
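Wiring this up without the marketplace action is also possible. Below is a minimal hand-rolled workflow sketch, assuming the AWS credentials and the CodeCommit URL are stored as repository secrets (all names here are illustrative, not from the action's documentation):

```yaml
name: SyncToCodeCommit
on:
  push:
    branches: [main, dev, preprod]
jobs:
  sync:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 0   # full history, so every ref is available to push
      - name: Mirror to CodeCommit
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: us-east-1
          CODECOMMIT_URL: ${{ secrets.CODECOMMIT_URL }}
        run: |
          git config --global credential.'https://git-codecommit.*.amazonaws.com'.helper '!aws codecommit credential-helper $@'
          git config --global credential.UseHttpPath true
          git remote add sync "$CODECOMMIT_URL"
          git push sync --mirror
```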

Related

How to deploy 2 CodeCommit repositories using CodeDeploy and CodePipeline with the same appspec.yml?

I have 3 CodeCommit repositories:
Repo 1: App A files
Repo 2: App B files
Repo 3: Some config files for apps A and B + appspec.yml
I would like to create 2 CodePipelines deploying my apps on an EC2 instance: the first one taking Repo 1 and Repo 3, and the second one taking Repo 2 and Repo 3. I want to use the same appspec.yml, because Repo 1 and Repo 2 have the same tree structure and I don't want to duplicate appspec.yml and the common config files from Repo 3.
It seems like it's not possible to have 2 sources in CodePipeline if the next stage is CodeDeploy. So I decided to put Repo 3 as source and use a BeforeInstall script to git clone Repo 1 or Repo2 depending on the deployment group.
So my appspec.yml looks like that:
version: 0.0
os: linux
files:
  - source: config
    destination: /var/lib/app
hooks:
  BeforeInstall:
    - location: scripts/clone-repository.sh
And then clone-repository.sh looks like this:
yum install -y git
if [ "$DEPLOYMENT_GROUP_NAME" == "group1" ]
then
    git clone <Repo 1>
    # mv to the right place
    # etc.
elif [ "$DEPLOYMENT_GROUP_NAME" == "group2" ]
then
    git clone <Repo 2>
    # mv to the right place
    # etc.
fi
I was forced to install git, otherwise I got an error. I also tried to add these lines:
git config --global credential.helper '!aws codecommit credential-helper $@'
git config --global credential.UseHttpPath true
The CodeDeploy role has AWSCodeCommitPowerUser and AWSCodeDeployRole, but it is impossible to git clone. I get the following error: error: [stderr]: unable to access '': the requested URL returned error: 403
What's missing?
If you have another idea to solve my issue, I'll take it!
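One thing worth double-checking is the credential-helper line itself: the argument placeholder is $@, not $# (a single typo there is enough to produce 403s), and UseHttpPath must be set so the helper is consulted for CodeCommit URLs. Also note that the hook scripts run on the instance, so it is the EC2 instance profile, not the CodeDeploy service role, that needs the CodeCommit permissions. A minimal sketch of the git-side setup, writing to a throwaway HOME so it can be run safely:

```shell
#!/bin/sh
# Sketch of the git-side credential setup for CodeCommit.
# HOME is redirected to a temp dir so the demo doesn't touch real config.
set -e
export HOME=$(mktemp -d)
# yum install -y git   # as in the question; commented out for the demo
# '$@' forwards git's arguments to the AWS credential helper ('$#' would not).
git config --global credential.helper '!aws codecommit credential-helper $@'
git config --global credential.UseHttpPath true
git config --global credential.helper   # prints the helper line back
```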
Thank you for your help!

AWS CDK CodePipeline deploying app and CDK

I'm using the AWS CDK with typescript and I'd like to automate my CDK and Code Package deployments.
I have 2 github repos: app-cdk and app-website.
I have setup a CodePipeline as follows:
const pipeline = new CodePipeline(this, 'MyAppPipeline', {
  pipelineName: 'MyAppPipeline',
  synth: new ShellStep('Synth', {
    input: CodePipelineSource.gitHub(`${ORG_NAME}/app-cdk`, BRANCH_NAME, {}),
    commands: ['npm ci', 'npm run build', 'npx cdk synth']
  })
});
and added a beta stage as follows:
pipeline.addStage(new MyAppStage(this, 'Beta', {
  env: { account: 'XXXXXXXXX', region: 'us-east-2' }
}))
This works fine when I push code to my CDK code package, and it deploys new resources. How can I add my website repo as a source to kick off this pipeline, build it in a different manner, and deploy the assets to the necessary resources? Shouldn't that be a part of the CodePipeline's source and build stages?
I have encountered a similar scenario, where I had to create a CDK Pipeline for multiple static S3 sites in a repository.
It soon became evident that this had to be done using two stacks, as Pipeline requires each step to be of type Stage and does not support Construct, whereas my static S3 websites were a construct (BucketDeployment).
I handled this integration as follows:
deployment_code_build = cb.Project(self, 'PartnerS3deployment',
    project_name='PartnerStaticS3deployment',
    source=cb.Source.git_hub(owner='<github-org>',
        repo='<repo-name>', clone_depth=1,
        webhook_filters=[
            cb.FilterGroup.in_event_of(
                cb.EventAction.PUSH).and_branch_is(
                branch_name="main")]),
    environment=cb.BuildEnvironment(
        build_image=cb.LinuxBuildImage.STANDARD_5_0
    ))
This provisioned a CodeBuild project that dynamically deploys the changesets of every stack listed by cdk ls.
The above CodeBuild project will need a buildspec file in the root of the repo with the following content (for reference):
version: 0.2
phases:
  install:
    commands:
      - echo Entered in install phase...
      - npm install -g aws-cdk
      - cdk --version
  build:
    commands:
      - pwd
      - cd cdk_pipeline_static_websites
      - ls -lah
      - python -m pip install -r requirements.txt
      - nohup ./parallel_deploy.sh & echo $! > pidfile && wait $(cat pidfile)
    finally:
      - echo Build completed on `date`
The contents of parallel_deploy.sh are as follows:
#!/bin/bash
for stack in $(cdk list); do
    cdk deploy "$stack" --require-approval=never &
done
wait   # without this the script exits before the backgrounded deploys finish
While this works great, there has to be a simpler alternative that can directly import other stacks/constructs in the CDK Pipeline class.

Terraform and Gitlab CI pipeline for multiple projects

We have an existing Gitlab CI EE pipeline for Terraform that works, one env at a time, each of them in a different AWS account.
However, we want to be able to scale the pipeline for different teams that use TF for their IaC requirements.
Is there a way to do it? A different IaC repo/branch per team, with the same pipeline handling TF deployments and S3 as the backend?
Any other way to do it? We have GitLab CI EE and the Terraform open-source edition.
I did something similar.
I had multiple repositories, one for each Terraform module, and a main repo that calls each module. In this main repository I had something like this:
- dev/
  - main.tf
  - variables.tf
  - dev.tfvars
  - backend.tf
- test/
- prd/
- .gitlab-ci.yml
Each environment had its own folder/files with the S3 backend and the variables necessary to run. The main file is where all the other repos are called. The pipeline is triggered when it detects changes in those paths, and it is initialized in the environment corresponding to the commit branch:
include:
  - template: Terraform/Base.latest.gitlab-ci.yml

before_script:
  - echo "${CI_COMMIT_BRANCH}"
  - mkdir -p ~/.ssh
  - touch ~/.ssh/known_hosts
  - echo "$SSH_PUBLIC_KEY" > ~/.ssh/id_rsa.pub # allows the main repository to fetch the other (private) repos
  - echo "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
  - echo "$KNOWN_HOSTS" > ~/.ssh/known_hosts
  - terraform --version

stages:
  - init

init:
  stage: init
  environment: $CI_COMMIT_BRANCH
  script:
    - terraform init # just an example
  rules:
    - if: $CI_COMMIT_BRANCH == "dev" || $CI_COMMIT_BRANCH == "tst" || $CI_COMMIT_BRANCH == "prd"
      changes:
        - dev/*
        - tst/*
        - prd/*
I must say this is not the best way to do it; there are some "security" points to mention, but they can be solved with a little ingenuity. For instance, I have understood that backend.tf shouldn't be explicit in each folder, and neither should the .tfvars file; someone told me that Terraform Enterprise would solve these issues. Another "dirty thing" is that there is some duplicated code, because each environment folder contains the same main file, outputs, and variables.
So far, the way I did it works. I hope this can give you an idea :)
Good luck.

AWS Codepipeline with bitbucket and how to pass branch name to appspec.yaml

I've created a CodePipeline for a PHP Laravel project with Bitbucket, passing parameters from AWS SSM to the appspec.yml. Everything works fine with the development branch. Now I need to update the parameters from AWS SSM based on the branch name in the appspec.yml file.
FOR DEV
Branch name: develop
Parameter value: BRANCH_NAME_VALUE (develop_value)
FOR QA
Branch name: qa
Parameter value: BRANCH_NAME_VALUE (qa_value)
appspec.yaml file:
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html/
    overwrite: true
hooks:
  BeforeInstall:
    - location: scripts/before_install.sh
      timeout: 300
      runas: root
  AfterInstall:
    - location: scripts/after_install.sh
      timeout: 300
      runas: root
How can I get the BRANCH_NAME to update after_install.sh?
Not sure what you want to do, but you can't pass arbitrary env variables to CodeDeploy. The only supported ones are:
LIFECYCLE_EVENT: This variable contains the name of the lifecycle event associated with the script.
DEPLOYMENT_ID: This variable contains the deployment ID of the current deployment.
APPLICATION_NAME: This variable contains the name of the application being deployed. This is the name the user sets in the console or AWS CLI.
DEPLOYMENT_GROUP_NAME: This variable contains the name of the deployment group. A deployment group is a set of instances associated with an application that you target for a deployment.
DEPLOYMENT_GROUP_ID: This variable contains the ID of the deployment group in AWS CodeDeploy that corresponds to the current deployment.
Thus, in your case you could have two deployment groups called develop and qa. Then, in the CodeDeploy scripts, you could test the deployment group name using DEPLOYMENT_GROUP_NAME and fetch the respective SSM parameters.
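For example, a hypothetical after_install.sh fragment could map the deployment group to an SSM parameter path (the group names follow the question; the parameter paths are made up):

```shell
#!/bin/sh
# Map the CodeDeploy deployment group to an SSM parameter path.
# The paths below are illustrative placeholders.
param_for_group() {
    case "$1" in
        develop) echo "/app/develop/BRANCH_NAME_VALUE" ;;
        qa)      echo "/app/qa/BRANCH_NAME_VALUE" ;;
        *)       echo "unknown deployment group: $1" >&2; return 1 ;;
    esac
}

# In the real hook you would then fetch the value, e.g.:
# BRANCH_NAME_VALUE=$(aws ssm get-parameter \
#     --name "$(param_for_group "$DEPLOYMENT_GROUP_NAME")" \
#     --with-decryption --query Parameter.Value --output text)
```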
It seems that you are trying to merge branches and are facing issues where specific files or directories change per branch. I faced a similar issue; you can try creating a .gitattributes per branch. The destination branch will have this so that, once the source branch is merged, its specific files won't overwrite the destination branch.
Check references:
- https://git-scm.com/book/en/v2/Customizing-Git-Git-Attributes#_merge_strategies
- Git - Ignore files during merge
Example: two branches, master (for the production environment) and stage (for the development environment):
git config --global merge.ours.driver true
git checkout master
echo "appspec.yml merge=ours" >> .gitattributes
echo "scripts/before-install.sh merge=ours" >> .gitattributes
git merge stage
$ cat .gitattributes
appspec.yml merge=ours
scripts/before-install.sh merge=ours
Summary: the idea is to keep appspec.yml clean and environment-free and handle it at the git level itself. Unfortunately, appspec.yml still does not support variables to accommodate per-branch values.
Additionally, I would also add the above paths to a per-branch .gitignore to avoid them being altered during commits. The above is just an example; in a production setup you could disable direct commits to the master branch and only use pull requests, with manual approval at the AWS CodePipeline level and SNS topics for approval emails, and use a feature branch and merge to stage first.
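The merge=ours behaviour can be verified end-to-end in a throwaway repository. The snippet below keeps the branch name generic (the repository's initial branch plays the role of master above):

```shell
#!/bin/sh
# Throwaway demo of the 'merge=ours' driver keeping the destination
# branch's appspec.yml intact across a merge.
set -e
cd "$(mktemp -d)"
git init -q .
git config user.email demo@example.com
git config user.name demo
git config merge.ours.driver true            # the driver command is literally 'true'
echo "master-v1" > appspec.yml
echo "appspec.yml merge=ours" > .gitattributes
git add . && git commit -qm "base"
base=$(git symbolic-ref --short HEAD)        # plays the role of 'master'
git checkout -qb stage
echo "stage-version" > appspec.yml
git commit -qam "stage change"
git checkout -q "$base"
echo "master-v2" > appspec.yml               # diverge so the merge is not a fast-forward
git commit -qam "base change"
git merge -q --no-edit stage
cat appspec.yml                              # still the base branch's version
```

Note that the driver only kicks in when the file changed on both sides; on a fast-forward or one-sided change, the incoming version is taken as usual.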

github pages issue when using github actions and github-pages-deploy-action?

I have a simple GitHub repo where I host the content of my CV. I use hackmyresume to generate the index.html. I'm using GitHub Actions to run the npm build, and it should publish the generated content to the gh-pages branch.
My workflow file has:
on:
  push:
    branches:
      - master
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Deploy with github-pages
        uses: JamesIves/github-pages-deploy-action@master
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          BASE_BRANCH: master # The branch the action should deploy from.
          BRANCH: gh-pages # The branch the action should deploy to.
          FOLDER: target # The folder the action should deploy.
          BUILD_SCRIPT: npm install && npm run-script build
And the build command is:
"build": "hackmyresume BUILD ./src/main/resources/json/fresh/resume.json target/index.html -t compact",
I can see the generated html file getting committed to the github branch
https://github.com/emeraldjava/emeraldjava/blob/gh-pages/index.html
but the GitHub Pages site doesn't pick this up. I get a 404 error when I hit
https://emeraldjava.github.io/emeraldjava/
I believe my repo settings and secrets are correct, but I must be missing something small. Any help would be appreciated.
This is happening because of your use of the GITHUB_TOKEN variable. There's an open issue with GitHub due to the fact that the built-in token doesn't trigger the GitHub Pages deploy job. This means you'll see the files get committed correctly, but they won't be visible.
To get around this you can use a GitHub access token. You can learn how to generate one here. It needs to be correctly scoped so it has permission to push to a public repository. You'd store this token in your repository's Settings > Secrets menu (call it something like ACCESS_TOKEN), and then reference it in your configuration like so:
on:
  push:
    branches:
      - master
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Deploy with github-pages
        uses: JamesIves/github-pages-deploy-action@master
        env:
          ACCESS_TOKEN: ${{ secrets.ACCESS_TOKEN }}
          BASE_BRANCH: master # The branch the action should deploy from.
          BRANCH: gh-pages # The branch the action should deploy to.
          FOLDER: target # The folder the action should deploy.
          BUILD_SCRIPT: npm install && npm run-script build
You can find an outline of these variables here. Using an access token will allow the GitHub Pages job to trigger when a new deployment is made. I hope that helps!