
Does Argo CD automated sync policy work with helm repository?
Checking the documentation at https://argo-cd-docs.readthedocs.io/en/latest/user-guide/auto_sync/
it mentions only Git repositories. But in the project settings one can set an automated sync policy even for a Helm repository.
What I expect from automated sync for Helm repositories: the application should be synced once a new, higher version of the Helm chart appears in the Helm repository.

Argo CD supports Helm repositories just fine for sync purposes. However, it doesn't check for "newer" versions as you describe.
You might want to look at https://argocd-image-updater.readthedocs.io/en/stable/, which does the same thing for container images (and, given OCI support, it may at some point also do what you ask).
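For reference, a minimal sketch of an Application that pulls a chart from a Helm repository with automated sync enabled (the application name, chart and repository URL are hypothetical); automated sync keeps the cluster in line with the pinned targetRevision, it does not chase newer chart versions:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://charts.example.com   # Helm repository (hypothetical URL)
    chart: my-chart                       # chart name in that repository (hypothetical)
    targetRevision: 1.2.3                 # pinned chart version; Argo CD will not bump this on its own
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true
      selfHeal: true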

Related

AWS EFS too slow when I use git & npm install

I'm using AWS S3 storage and CloudFront for my dozens of static web sites.
I'm also using AWS Lambda with Node.js and EFS for git, node_modules and build cache files.
When I run git clone, npm install and npm run build from EFS, it works too slowly.
But when I run them from the Lambda /tmp folder, it works about 10x faster than from EFS storage.
I need storage like EFS because I store dozens of web sites' git, node package and cache files. So how can I increase EFS performance?
If you have used the standard settings for EFS you will be utilizing burstable credits, which are depleted the more file changes you make.
Depending on the file size and the number of changes on the EFS mount you may be depleting the available credits, which would cause performance problems for any application attached to the EFS mount. You can detect this by looking at the BurstCreditBalance CloudWatch metric; also keep an eye out for any flatlining of TotalIOBytes, as this might suggest it has reached its maximum throughput.
When you perform the git clone you could also use the --depth option with a value of 1 to create a shallow clone. This option fetches only the latest commit, as opposed to cloning the entire git history.
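A minimal example, with a hypothetical repository URL:
git clone --depth 1 https://git.example.com/my-site.git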
As an improvement to this workflow, I would suggest using the following technologies: rather than a Lambda function, create a CodePipeline pipeline that triggers a CodeBuild job. This CodeBuild job would be responsible for running the npm install task for you, as well as any other actions.
Part of CodePipeline's flow is that it stores the artifact in S3 along the way, so that you have a copy of it. CodePipeline can also deploy to your S3 bucket at the end.
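A sketch of what the buildspec for such a CodeBuild job could look like (the Node.js version, build commands and artifact paths are assumptions about your project):
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 16
  build:
    commands:
      # dependencies are installed on CodeBuild's local disk rather than EFS
      - npm install
      - npm run build
artifacts:
  files:
    - '**/*'
  base-directory: build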
A couple of links that might be useful for you:
AWS EFS Performance
Tutorial: Create a pipeline that uses Amazon S3 as a deployment provider

How to integrate Azure Repo with AWS CodeCommit

I want to implement CI/CD in AWS CodeCommit.
I know it's possible to manually kickstart the process once the code reaches CodeCommit. But I am using an Azure DevOps repo as my source code repo and want to automate the process.
The deployment is done using AWS SAM. I am looking for a method where, when I push code to the Azure repo, it reaches AWS CodeCommit and the CI/CD runs without any further manual intervention.
Is there any way to do that?
Azure Repos and CodeCommit are compliant with the git standard. The git standard allows you to specify multiple remotes. This is useful if you are maintaining a mirror or, as in your use case, if you need to do something in different environments.
You can read about setting multiple remotes here (provided by GitHub; even though you're not using GitHub, the process and commands should be the same).
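A minimal sketch of that setup (the organization, project, repository names and region are hypothetical, and git credentials for CodeCommit are assumed to be configured):
# keep the existing Azure DevOps URL as a push target and add CodeCommit as a second one
git remote set-url --add --push origin https://dev.azure.com/my-org/my-project/_git/my-repo
git remote set-url --add --push origin https://git-codecommit.us-east-1.amazonaws.com/v1/repos/my-repo
# a single "git push origin main" now updates both repositories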
Once you have your multiple remotes set up, you can configure your CI/CD pipeline to kick off its process to deploy your SAM template based on your push; when you push your code changes they will be sent to both your Azure repo and your CodeCommit repo, and the CI/CD pipeline that is monitoring your CodeCommit repo will see the change and kick off its execution.
It's worth pointing out that you'll need to properly set up and configure your CI/CD pipeline. AWS provides a number of services to support this, including AWS CodePipeline, AWS CodeBuild, and AWS CodeDeploy.

Building a container with gcloud, kubectl and python3.6

I want my deployment pipeline to run a python 3.6 script on my GKE hosted database.
For that, locally, I use kubectl port-forward then run the script successfully.
However, to run it in my pipeline, I need to start a container that supports both GKE access and python3.6.
To run python3.6 I'm using the image python:3.6
To run gcloud and kubectl I'm using the image google/cloud-sdk:latest
However, gcloud uses Python 2, which makes it very difficult for me to orchestrate a container that includes all of these tools.
For reference, I'm using Bitbucket Pipelines. I might be able to solve it with the services feature, but currently it's too complicated since I need to run many commands on both potential containers.
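For context, a rough sketch of the local workflow that the pipeline would need to reproduce (the service name, namespace, port and script name are hypothetical):
# forward the database service from the GKE cluster to localhost in the background
kubectl port-forward svc/my-database 5432:5432 --namespace my-namespace &
# run the script against the forwarded port
python3.6 my_script.py --host localhost --port 5432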

AWS CD with CodeDeploy for Docker Images

I have a scenario and am looking for feedback and best approaches. We create and build our Docker images using Azure DevOps (VSTS) and push those images to our AWS repository. Now I can deploy those images just fine manually, but I would like to automate the process in a continual deployment model. Is there an approach to use CodePipeline with a build step to just create and zip the imagedefinitions.json file before it goes to the deploy step?
Or is there a better alternative that I am overlooking?
Thanks!
You can definitely use a build step (e.g. CodeBuild) to automate generating your imagedefinitions.json file; there's an example here.
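A rough sketch of such a buildspec (the container name is hypothetical, the REPOSITORY_URI and IMAGE_TAG variables are assumed to be supplied to the build, and the image itself is assumed to already be built and pushed by Azure DevOps):
version: 0.2
phases:
  build:
    commands:
      # write the file the ECS deploy action expects: a JSON array of container name + image URI
      - printf '[{"name":"my-container","imageUri":"%s"}]' "$REPOSITORY_URI:$IMAGE_TAG" > imagedefinitions.json
artifacts:
  files:
    - imagedefinitions.json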
You might also want to look at the recently announced CodeDeploy ECS deployment option. It works a little differently to the ECS deployment action but allows blue/green deployments via CodeDeploy. There's more information in the announcement and blog post.

Automate code deploy from GitLab to AWS EC2 instance

We're building an application for which we are using a GitLab repository. Manual deployment of code to the test server, which is an Amazon AWS EC2 instance, is tedious. I'm planning to automate the deployment process, such that when we commit code, it is reflected on the test instance.
From my knowledge we can use the AWS CodeDeploy service to fetch the code from GitHub, but CodeDeploy does not support GitLab repositories. Is there a way to automate the code deployment process to an AWS EC2 instance through GitLab, or is there a shell scripting possibility to achieve this? Kindly educate me.
One way you could achieve this with AWS CodeDeploy is by using the S3 option in conjunction with GitLab CI: http://docs.aws.amazon.com/codepipeline/latest/userguide/getting-started-w.html
Depending on how your project is set up, you may have the possibility to generate a distribution zip (Gradle offers this through the application plugin). You may need to generate your "distribution" file manually if your project does not offer such a capability.
GitLab does not offer a direct S3 integration; however, through the .gitlab-ci.yml you can install the AWS CLI in the build container and run the necessary upload commands to put the generated zip file in the S3 bucket, as per the AWS instructions, to trigger the deployment.
Here is an example of what your before_script could look like in the .gitlab-ci.yml file:
before_script:
  # refresh the package index and install Python with pip
  - apt-get update --quiet --yes
  - apt-get install --quiet --yes python3 python3-pip
  # install the AWS CLI used for the S3 upload
  - pip3 install --upgrade pip
  - pip3 install awscli
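The upload itself could then happen in a job's script section; a minimal sketch (the bucket name, artifact name and build output path are hypothetical, and AWS credentials are assumed to be provided as CI/CD variables):
deploy_to_s3:
  stage: deploy
  script:
    # package whatever your build produces into the revision CodeDeploy expects
    - zip -r my-app.zip build/ appspec.yml
    # uploading the zip to the watched bucket is what triggers the deployment
    - aws s3 cp my-app.zip s3://my-deployment-bucket/my-app.zip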
The AWS tutorial on how to use CodeDeploy with S3 is very detailed, so I will skip attempting to reproduce the contents here.
With regard to the actual deployment commands and actions that you are currently performing manually, AWS CodeDeploy provides the capability to run scripts, defined in the AppSpec file, against lifecycle event hooks for the application (see the sketch after these links):
http://docs.aws.amazon.com/codedeploy/latest/userguide/writing-app-spec.html
http://docs.aws.amazon.com/codedeploy/latest/userguide/app-spec-ref.html
http://docs.aws.amazon.com/codedeploy/latest/userguide/app-spec-ref-hooks.html
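For illustration, a minimal appspec.yml with lifecycle hooks could look like this (the destination path and script names are hypothetical):
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/my-app
hooks:
  AfterInstall:
    # install dependencies after the files have been copied to the instance
    - location: scripts/install_dependencies.sh
      timeout: 300
      runas: root
  ApplicationStart:
    # start (or restart) the application
    - location: scripts/start_server.sh
      timeout: 300
      runas: root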
I hope this helps.
This is one of my old posts, but I happened to find an answer for it. Although my question was specifically about working with CodeDeploy, I would say there is no need for any AWS-specific services when using GitLab.
We don't require CodeDeploy at all. There is also no need to use any external CI server like TeamCity or Jenkins to perform the CI from GitLab anymore.
We need to add a .gitlab-ci.yml file to the root of the repository on that branch and write the YAML script in it. GitLab has pipelines that will perform the CI/CD automatically.
GitLab CI/CD pipelines work much like a Jenkins server: using the YAML script we can SSH into the EC2 instance and place the files on it.
An example of how to write the .gitlab-ci.yml file to SSH to an EC2 instance is here: https://docs.gitlab.com/ee/ci/yaml/README.html
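A minimal sketch of such a job (the EC2_HOST and SSH_PRIVATE_KEY CI/CD variables, the user, the build output path and the target directory are all assumptions you would need to adapt):
deploy_to_ec2:
  stage: deploy
  before_script:
    # load the private key stored in the SSH_PRIVATE_KEY CI/CD variable into an ssh-agent
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - ssh-keyscan -H "$EC2_HOST" >> ~/.ssh/known_hosts
  script:
    # copy the built files to the instance and restart the application
    - scp -r build/ "ec2-user@$EC2_HOST:/var/www/my-app/"
    - ssh "ec2-user@$EC2_HOST" "sudo systemctl restart my-app"
  only:
    - master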