Connecting an app in ArgoCD to use a Helm OCI repository - amazon-ecr

I can see that Argo CD seems to support OCI repositories, but I can't get this to work.
First, I can only add repositories through the CLI, because the UI has no option for enabling OCI:
argocd repo add <uri> --type helm --name name --enable-oci
However, when adding an app through the UI, the Argo server logs "unsupported protocol scheme ''" when I select the repository. I have tried the URI both with https:// and with no scheme (as mentioned in the issues).
Is it possible to use the UI for OCI repositories, or is this a command-line-only thing?
I am using Argo CD version 2.0.4.

I used the following command and it worked for me.
argocd repo add <acr name>.azurecr.io --type helm --name <some name> --enable-oci --username <username> --password <password>
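Since the question is actually about Amazon ECR, the same flag combination should work there as well. A hedged sketch (the registry host and region are placeholders; note that ECR tokens expire after roughly 12 hours, so the password needs periodic refreshing):
argocd repo add <account-id>.dkr.ecr.<region>.amazonaws.com --type helm --name ecr --enable-oci \
  --username AWS \
  --password "$(aws ecr get-login-password --region <region>)"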
You can also try to configure it declaratively, as in issue-7121:
apiVersion: v1
kind: Secret
metadata:
  name: my-oci-charts
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  enableOCI: "true"
  name: my-oci-charts
  type: helm
  url: registry.gitlab.com/asdasd/charts
  username: token-name
  password: token-password
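A minimal sketch of applying that secret, assuming Argo CD runs in the default argocd namespace and the manifest is saved in a hypothetical oci-repo-secret.yaml:
kubectl apply -n argocd -f oci-repo-secret.yaml
The repository should then show up in the UI under Settings > Repositories.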

Could not install istio 1.6.3 demo profile on AWS EKS

I installed with:
istioctl install --set profile=demo
and I got this error:
2020-06-23T06:53:12.111697Z  error  installer  failed to create "PeerAuthentication/istio-system/grafana-ports-mtls-disabled": Timeout: request did not complete within requested timeout 30s
✘ Addons encountered an error: failed to create "PeerAuthentication/istio-system/grafana-ports-mtls-disabled": Timeout: request did not complete within requested timeout 30s
- Pruning removed resources
Error: failed to apply manifests: errors occurred during operation
I assume there is something wrong with either istioctl install on AWS, or with your cluster.
You could try to create a new EKS cluster and check whether it works there; if it doesn't, I would suggest opening a new thread on the Istio GitHub.
If you have the same problem as @Possathon Chitpidakorn, you can use the Istio operator as a workaround to install Istio, more about it below.
Istio operator
Every operator implementation requires a custom resource definition (CRD) to define its custom resource, that is, its API. Istio’s operator API is defined by the IstioControlPlane CRD, which is generated from an IstioControlPlane proto. The API supports all of Istio’s current configuration profiles using a single field to select the profile. For example, the following IstioControlPlane resource configures Istio using the demo profile:
apiVersion: install.istio.io/v1alpha2
kind: IstioControlPlane
metadata:
  namespace: istio-operator
  name: example-istiocontrolplane
spec:
  profile: demo
You can then customize the configuration with additional settings. For example, to disable telemetry:
apiVersion: install.istio.io/v1alpha2
kind: IstioControlPlane
metadata:
  namespace: istio-operator
  name: example-istiocontrolplane
spec:
  profile: demo
  telemetry:
    enabled: false
How to install Istio with the Istio operator
Prerequisites
Perform any necessary platform-specific setup.
Check the Requirements for Pods and Services.
Install the istioctl command.
Deploy the Istio operator:
istioctl operator init
This command runs the operator by creating the following resources in the istio-operator namespace:
The operator custom resource definition
The operator controller deployment
A service to access operator metrics
Necessary Istio operator RBAC rules
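Before going further, a quick sanity check (assuming the default istio-operator namespace from above):
$ kubectl get pods -n istio-operator
The operator controller pod should be Running before you apply an IstioOperator resource.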
See the available istioctl operator init flags to control which namespaces the controller and Istio are installed into, and the installed Istio image sources and versions.
You can alternatively deploy the operator using Helm:
$ helm template manifests/charts/istio-operator/ \
--set hub=docker.io/istio \
--set tag=1.6.3 \
--set operatorNamespace=istio-operator \
--set istioNamespace=istio-system | kubectl apply -f -
Note that you need to download the Istio release to run the above command.
To install the Istio demo configuration profile using the operator, run the following command:
$ kubectl create ns istio-system
$ kubectl apply -f - <<EOF
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: example-istiocontrolplane
spec:
  profile: demo
EOF
The controller will detect the IstioOperator resource and then install the Istio components corresponding to the specified (demo) configuration.
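A hedged way to watch the installation converge (the resource name matches the manifest above):
$ kubectl get pods -n istio-system
$ kubectl get istiooperator example-istiocontrolplane -n istio-system
With the demo profile, the control-plane and addon pods should all eventually reach Running.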

How to deploy a Helm chart from GitLab to EKS?

I created a Kubernetes cluster and linked it with EKS.
I also created a Helm chart and a .gitlab-ci.yml.
I want to add a new step that deploys my app to the cluster using Helm, but I can't find a recent tutorial; all the tutorials use GitLab Auto DevOps.
The image is hosted on GitLab.
How can I achieve this? Here is my current .gitlab-ci.yml:
image: docker:latest
services:
  - docker:dind
variables:
  DOCKER_DRIVER: overlay
  SPRING_PROFILES_ACTIVE: test
  USER_GITLAB: kosted
  APP_NAME: mebooks
  REPO: gara-mebooks
  MAVEN_CLI_OPTS: "-s .m2/settings.xml --batch-mode"
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"
stages:
  - deploy
k8s-deploy:
  stage: deploy
  image: dtzar/helm-kubectl:3.1.2
  only:
    - develop
  script:
    # Read certificate stored in $KUBE_CA_PEM variable and save it in a new file
    - echo $KUBE_URL
    - kubectl config set-cluster gara-eks-cluster --server="$KUBE_URL" --certificate-authority="$KUBE_CA_PEM"
    - kubectl get pods
In the GitLab console I got:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Running after_script 00:01
Uploading artifacts for failed job 00:02
ERROR: Job failed: exit code 1
1 - Create an IAM role or user from your AWS console.
2 - Connect to your bastion and add the role/user ARN to the aws-auth ConfigMap (a sketch of such an entry follows the job examples below).
You can follow this to understand how it works (see the "you are not the creator of the cluster" paragraph): https://aws.amazon.com/fr/premiumsupport/knowledge-center/eks-api-server-unauthorized-error/
3 - If it is a user you created, you just have to add this to your GitLab CI:
k8s-deploy:
  stage: deploy
  image: <an image with aws + kubectl + helm>
  only:
    - develop
  script:
    - aws --version
    - aws --profile default configure set aws_access_key_id "your access id"
    - aws --profile default configure set aws_secret_access_key "your secret"
    - helm version
    - aws eks update-kubeconfig --name NAME-OF-YOUR-CLUSTER --region eu-west-3
    # - helm init  # the original "helm upgrade init" is not a valid command; Helm 2 needed "helm init", Helm 3 does not
    - helm upgrade --install my-chart ./my-chart-folder
If you created a role, not a user, you just have to do:
k8s-deploy:
  stage: deploy
  image: <an image with aws + kubectl + helm>
  only:
    - develop
  script:
    - aws --version
    - helm version
    - aws eks update-kubeconfig --name NAME-OF-YOUR-CLUSTER --region eu-west-3 --role-arn <role-arn>
    - helm upgrade --install my-chart ./my-chart-folder
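For reference, a hedged sketch of the aws-auth entry from step 2; the role ARN, username, and group are hypothetical placeholders (for a user, a mapUsers entry is analogous):
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::<account-id>:role/<gitlab-deploy-role>  # hypothetical role from step 1
      username: gitlab-deploy
      groups:
        - system:masters  # broad access; narrow this in a real cluster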
Here I am adding my method, which is generic and can be used in any Kubernetes environment, without the AWS CLI.
First, you need to convert your Kube Config to a base64 string:
cat ~/.kube/config | base64
Add the result string as a variable to your CI/CD pipeline settings of the project/group. In my example I used kube_config. Read more on how to add variables here.
Here is my CI YAML file:
stages:
  # - build
  # - test
  - deploy
variables:
  KUBEFOLDER: /root/.kube
  KUBECONFIG: $KUBEFOLDER/config
k8s-deploy-job:
  stage: deploy
  image: dtzar/helm-kubectl:3.5.0
  before_script:
    - mkdir ${KUBEFOLDER}
    - echo ${kube_config} | base64 -d > ${KUBECONFIG}
    - helm version
    - helm repo update
  script:
    - echo "Deploying application..."
    - kubectl get pods
    # - helm upgrade --install my-chart ./my-chart-folder
    - echo "Application successfully deployed."
Inspired by:
https://about.gitlab.com/blog/2017/09/21/how-to-create-ci-cd-pipeline-with-autodeploy-to-kubernetes-using-gitlab-and-helm/

Unable to deploy the image in Kubernetes (AWS)

I am stuck at the last step and cannot figure out the mistake. Everything is working fine, but while deploying the image on the cluster I get an error.
The image is in Docker Hub; from the AWS instance I used docker login and provided the credentials as well.
sudo kops validate cluster --state=s3://kops-storage-54321 -o yaml
Output:
Using cluster from kubectl context: tests.k8s.local
nodes:
- hostname: ip-172-20-40-124.us-east-2.compute.internal
  name: ip-172-20-40-124.us-east-2.compute.internal
  role: master
  status: "True"
  zone: us-east-2a
- hostname: ip-172-20-112-165.us-east-2.compute.internal
  name: ip-172-20-112-165.us-east-2.compute.internal
  role: node
  status: "True"
  zone: us-east-2c
- hostname: ip-172-20-60-168.us-east-2.compute.internal
  name: ip-172-20-60-168.us-east-2.compute.internal
  role: node
  status: "True"
  zone: us-east-2a
Docker login:
sudo docker login
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /home/ubuntu/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
While deploying the image I get the error:
Command:
ubuntu@ip-172-31-30-176:~$ sudo kubectl create deployment magicalnginx --image=amitranjan007/magicalnginx
Error:
error: no matches for extensions/, Kind=Deployment
You can check which API groups serve a given Kubernetes object using:
$ kubectl api-resources | grep deployment
deployments   deploy   apps   true   Deployment
This means that, as of Kubernetes 1.16, only the apps API group serves Deployment; the extensions group no longer does.
Change apiVersion to apps/v1 in your deployment YAML.
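For example, a minimal Deployment manifest with the corrected apiVersion (the names and image simply mirror the question):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: magicalnginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: magicalnginx  # apps/v1 requires an explicit selector matching the pod labels
  template:
    metadata:
      labels:
        app: magicalnginx
    spec:
      containers:
      - name: magicalnginx
        image: amitranjan007/magicalnginx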

Unable to get aws-iam-authenticator in config-map while applying through AWS CodeBuild

I am building a CI/CD pipeline, using AWS CodeBuild to build and deploy an application (service) to an AWS EKS cluster. I have installed kubectl and aws-iam-authenticator properly,
but I get aws instead of aws-iam-authenticator as the command in the generated kubeconfig:
kind: Config
preferences: {}
users:
- name: arn:aws:eks:ap-south-1:*******:cluster/DevCluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - eks
      - get-token
      - --cluster-name
      - DevCluster
      command: aws
      env: null
[Container] 2019/05/14 04:32:09 Running command kubectl get svc 
error: the server doesn't have a resource type "svc"
I do not want to edit the config manually, because it comes through the pipeline.
As @Priya Rani said in the comments, here is the solution they found.
There is no issue with the configmap file; it's all right.
1) You need to make the CloudFormation (cluster + node instance) trusted role communicate with CodeBuild, by editing the trusted role.
2) You need to add a user data section so the node instances can communicate with the cluster.
Why don't you just load a proper/dedicated kubeconfig file, by setting the KUBECONFIG env variable inside your CI/CD pipeline, like this:
export KUBECONFIG=$KUBECONFIG:~/.kube/config-devel
which would include the right command to use with aws-iam-authenticator:
#
#config-devel
#
...
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "<cluster-name>"

Enable Grafana, Kiali and Jaeger after Istio installation?

I have installed Istio using Helm. I forgot to enable Grafana, Kiali and Jaeger. How can I enable all these services after installing Istio?
Here is a howto, from the official repository.
You need to update values.yaml and turn on Grafana, Kiali and Jaeger. For example, for Kiali, change:
kiali:
  enabled: false
to
kiali:
  enabled: true
Then rebuild the Helm dependencies:
helm dep update install/kubernetes/helm/istio
Then upgrade Istio inside Kubernetes (helm upgrade takes the release name before the chart path; istio is the usual release name here):
helm upgrade istio install/kubernetes/helm/istio
That's it, hope it was helpful.
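Alternatively, the same toggles can be passed on the command line instead of editing values.yaml. A hedged sketch against the Istio 1.x Helm chart (there, Jaeger is switched on through the tracing subchart):
helm upgrade istio install/kubernetes/helm/istio \
  --set grafana.enabled=true \
  --set kiali.enabled=true \
  --set tracing.enabled=true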
So did you install directly, or did you create a YAML from the templates?
I would run the command you used to install, but with the template function, and then add the options for Jaeger, Kiali and Grafana.
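A hedged sketch of that template approach, assuming the same Helm 2-era chart layout as the answer above:
helm template install/kubernetes/helm/istio \
  --name istio --namespace istio-system \
  --set grafana.enabled=true \
  --set kiali.enabled=true \
  --set tracing.enabled=true | kubectl apply -f -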