I have an AKS cluster running in Azure, and an Azure DevOps pipeline with a service connection to this AKS cluster. I would like to run istioctl from an Azure DevOps pipeline as a task. I tried to add the Istio task and run istioctl operator init.
The operator init step failed. Can someone please help?
I switched to the Azure CLI task instead of the third-party Istio task, and I am now able to install Istio on an AKS cluster via an Azure Pipeline. A few things to set up before downloading Istio and running operator init: 1) set the Azure subscription in the inline script, 2) get the context for the resource group and the AKS cluster, and 3) convert the classic pipeline to YAML. Here is the working YAML pipeline for installing Istio via Azure Pipelines on upper environments like UAT, staging, and prod, where we don't want to run commands one by one. Thanks for the idea of using the shell instead of the third-party task.
trigger: none
jobs:
- job: Job_1
  displayName: Agent job 1
  pool:
    vmImage: ubuntu-20.04
  steps:
  - checkout: self
  - task: AzureCLI@2
    displayName: Azure CLI [ 1 ]
    inputs:
      connectedServiceNameARM: $(DEV-CONN-SVC-NAME-ARM)
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        az account set --subscription $(DEV-SUBSCRIPTION)
        az aks get-credentials --resource-group $(DEV-RESOURCE-GROUP) --name $(DEV-RESOURCE-NAME)
        ls -lrt
        pwd
        curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.9.1 sh -
        cd istio-1.9.1
        export PATH=$PWD/bin:$PATH
        echo "run istioctl version"
        echo "************************************************************"
        istioctl version
        echo "************************************************************"
        echo "run istioctl operator init"
        echo "************************************************************"
        istioctl operator init
        echo "************************************************************"
I am using the new way to deploy to Google Cloud Run (and loving it!), but how can I pass the service name on the command?
I looked at the docs but found nothing related.
gcloud beta run deploy --source .
Service name: my-cloud-run-name
How can I pass the service name on the command line? Something like:
gcloud beta run deploy --name my-cloud-run-name --source .
Use a positional argument: gcloud beta run deploy SERVICE-NAME --source .
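Applied to the service name from the question, that would be:
gcloud beta run deploy my-cloud-run-name --source .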
I am trying to create a GitLab CI/CD pipeline to build my Java Spring project and deploy it to Amazon EKS.
I have followed the instructions in this article.
This is the gitlab-ci-cd.yml file that applies the deployment script.
k8s-deploy-dev:
  image: docker.io/sulemanhasib43/eks:latest
  stage: k8-deploy
  tags:
    - kubernetes
  before_script: *kubectl_config
  script:
    - sed -i "s#$CONTAINER_IMAGE#$CONTAINER_IMAGE:dev$CI_PIPELINE_IID#g" deployment.yaml
    - kubectl apply -f deployment.yaml -n dev
  only:
    - master
But I got an error when applying my deployment.yaml file:
system:node:"user" cannot create resource ...
However, when I added the EKS cluster to GitLab, I created a user with the cluster-admin role.
I have also tried adding roles to the system:node ClusterRole.
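For reference, the cluster-admin binding described above would look roughly like this sketch (the user name gitlab-admin is an assumption; substitute the user you actually created):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gitlab-admin-binding
subjects:
- kind: User
  name: gitlab-admin   # assumed name of the user added to GitLab
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io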
I created a Kubernetes cluster and linked it with EKS.
I also created a Helm chart and a .gitlab-ci.yml file.
I want to add a new step that deploys my app to the cluster using Helm, but I can't find a recent tutorial; all the tutorials use GitLab Auto DevOps.
The image is hosted on GitLab.
How can I achieve this?
image: docker:latest
services:
  - docker:dind
variables:
  DOCKER_DRIVER: overlay
  SPRING_PROFILES_ACTIVE: test
  USER_GITLAB: kosted
  APP_NAME: mebooks
  REPO: gara-mebooks
  MAVEN_CLI_OPTS: "-s .m2/settings.xml --batch-mode"
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"
stages:
  - deploy
k8s-deploy:
  stage: deploy
  image: dtzar/helm-kubectl:3.1.2
  only:
    - develop
  script:
    # Read the certificate stored in the $KUBE_CA_PEM variable and save it in a new file
    - echo $KUBE_URL
    - kubectl config set-cluster gara-eks-cluster --server="$KUBE_URL" --certificate-authority="$KUBE_CA_PEM"
    - kubectl get pods
In the GitLab console I got:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Running after_script 00:01
Uploading artifacts for failed job 00:02
ERROR: Job failed: exit code 1
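Note that the localhost:8080 message usually means kubectl has no current context; kubectl config set-cluster only registers the cluster entry. A minimal sketch of the missing steps, assuming credentials in a $KUBE_TOKEN variable and freely chosen entry names:
kubectl config set-credentials gitlab --token="$KUBE_TOKEN"
kubectl config set-context gara-eks --cluster=gara-eks-cluster --user=gitlab
kubectl config use-context gara-eks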
1 - Create an IAM role or user from your AWS console.
2 - Connect to your bastion and add the role/user ARN to the aws-auth ConfigMap (a sketch of the resulting ConfigMap follows the pipeline snippets below).
You can follow this to understand how it works (see the "you are not the creator of the cluster" paragraph): https://aws.amazon.com/fr/premiumsupport/knowledge-center/eks-api-server-unauthorized-error/
3 - In your GitLab CI, if it is a user you created, you just have to add this:
k8s-deploy:
  stage: deploy
  image: you need an image with aws + kubectl + helm
  only:
    - develop
  script:
    - aws --version
    - aws --profile default configure set aws_access_key_id "your access id"
    - aws --profile default configure set aws_secret_access_key "your secret"
    - helm version
    - aws eks update-kubeconfig --name NAME-OF-YOUR-CLUSTER --region eu-west-3
    - helm upgrade --install my-chart ./my-chart-folder
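Rather than hardcoding the key ID and secret in the YAML, you would normally keep them in GitLab CI/CD variables; the variable names below are assumptions:
- aws --profile default configure set aws_access_key_id "$AWS_ACCESS_KEY_ID"
- aws --profile default configure set aws_secret_access_key "$AWS_SECRET_ACCESS_KEY"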
If you created a role, not a user, you just have to do:
k8s-deploy:
  stage: deploy
  image: you need an image with aws + kubectl + helm
  only:
    - develop
  script:
    - aws --version
    - helm version
    - aws eks update-kubeconfig --name NAME-OF-YOUR-CLUSTER --region eu-west-3 --role-arn YOUR-ROLE-ARN
    - helm upgrade --install my-chart ./my-chart-folder
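For step 2 above, the aws-auth ConfigMap edited from the bastion might look like this minimal sketch (the account ID, user name, and group are placeholders; use mapRoles instead of mapUsers for a role):
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/gitlab-ci   # placeholder ARN
      username: gitlab-ci
      groups:
        - system:masters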
Here I am adding my method, which is generic and can be used in any Kubernetes environment without the AWS CLI.
First, you need to convert your Kube Config to a base64 string:
cat ~/.kube/config | base64
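Depending on the platform, base64 may wrap long output across multiple lines; for a single-line string (assuming GNU coreutils) you can disable wrapping:
cat ~/.kube/config | base64 -w 0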
Add the result string as a variable to your CI/CD pipeline settings of the project/group. In my example I used kube_config. Read more on how to add variables here.
Here is my CI YAML file:
stages:
  # - build
  # - test
  - deploy
variables:
  KUBEFOLDER: /root/.kube
  KUBECONFIG: $KUBEFOLDER/config
k8s-deploy-job:
  stage: deploy
  image: dtzar/helm-kubectl:3.5.0
  before_script:
    - mkdir ${KUBEFOLDER}
    - echo ${kube_config} | base64 -d > ${KUBECONFIG}
    - helm version
    - helm repo update
  script:
    - echo "Deploying application..."
    - kubectl get pods
    #- helm upgrade --install my-chart ./my-chart-folder
    - echo "Application successfully deployed."
Inspired by:
https://about.gitlab.com/blog/2017/09/21/how-to-create-ci-cd-pipeline-with-autodeploy-to-kubernetes-using-gitlab-and-helm/
I have a service on Google Cloud Run that I am able to deploy manually through the Google Cloud Console UI using an image from Container Registry, but deployment from the CLI fails. Here is the command I am using and the error I get; I cannot figure out what I am missing:
$ gcloud beta run deploy service-name --platform managed --region region-name --image image-url
Deploying container to Cloud Run service [service-name] in project [project-name] region [region-name]
X Deploying...
. Creating Revision...
. Routing traffic...
Deployment failed
ERROR: (gcloud.beta.run.deploy) INVALID_ARGUMENT: The request has errors
- '@type': type.googleapis.com/google.rpc.BadRequest
  fieldViolations:
  - description: spec.revisionTemplate.spec.container.ports should be empty
    field: spec.revisionTemplate.spec.container.ports
Update 1:
I have updated the SDK using gcloud components update, but I still have the same issue
Here's my SDK Version
$ gcloud version
Google Cloud SDK 270.0.0
beta 2019.05.17
bq 2.0.49
core 2019.11.04
gsutil 4.46
I am using a multistage docker build. Here's my Dockerfile:
FROM custom-dev-image
COPY . /project_dir
WORKDIR /project_dir
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 \
/usr/local/bin/go build -a \
-ldflags '-w -extldflags "-static"' \
-o /root/go/bin/executable ./cmds/project/main.go
FROM alpine:3.10
ENV GIN_MODE=release APP_NAME=project_name
COPY --from=0 /root/go/bin/executable /usr/local/bin/
CMD executable
I had this same problem, and I assume it was because my Cloud Run service had been created before I ran gcloud components update at some point.
I was able to fix it by deleting the whole Cloud Run service (through the GUI) and deploying it from scratch again (via the terminal). I noticed that the ports: definition disappeared from the YAML once I did this.
After this I could do deployments normally.
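If you hit the same issue, you can check whether a stale ports: block is still present before deleting anything by dumping the service spec (the service and region names here are from the question; flags may vary with SDK version):
gcloud beta run services describe service-name --platform managed --region region-name --format yaml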
This was a bug in Cloud Run. It has been fixed and deploying with CLI is working for me now. Here's the link to the issue I had raised with Google Cloud which has a response from them https://issuetracker.google.com/issues/144069696.
Can you help me find a useful step-by-step guide or a Gist outlining in detail how to configure CircleCI (using 2.0 syntax) to deploy to AWS EC2?
I understand the basic requirements and the moving pieces, but unsure what to put in the .circleci/config.yml file in the deploy step.
So far I have:
A "Hello World" Node.js app which is building successfully in CircleCI (just without the deploy step)
A running EC2 instance (Ubuntu 16.04)
An IAM user with sufficient permissions added to CircleCI for that particular job
Can you help out with the CircleCI deploy step?
Following your repository, you could create a script like this: deploy.sh
#!/bin/bash
echo "Start deploy"
cd ~/circleci-aws
git pull
npm i
npm run build
pm2 stop build/server
pm2 start build/server
echo "Deploy end"
And in your .circleci/config.yml you do this:
deploy:
  docker:
    - image: circleci/node:chakracore-8.11.1
  steps:
    - restore_cache:
        keys:
          - v1-dependencies-{{ checksum "package.json" }}
    - run:
        name: AWS EC2 deploy
        command: |
          # upload all the code to the machine
          scp -r -o StrictHostKeyChecking=no ./ ubuntu@13.236.1.107:/home/circleci-aws/
          # run the script on the machine
          ssh -o StrictHostKeyChecking=no ubuntu@13.236.1.107 "./deploy.sh"
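For the scp/ssh commands to authenticate, the job also needs the instance's private key; in CircleCI 2.0 that is typically provided with an add_ssh_keys step before the run step (the fingerprint below is a placeholder for a key uploaded in the project's SSH settings):
    - add_ssh_keys:
        fingerprints:
          - "aa:bb:cc:dd:ee:ff:00:11:22:33:44:55:66:77:88:99"  # placeholder fingerprint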
But this is quite ugly; try something like AWS CodeDeploy instead, or ECS if you want to use containers.