Could not install istio 1.6.3 demo profile on AWS EKS

I installed Istio with this command:
istioctl install --set profile=demo
and I got this error:
2020-06-23T06:53:12.111697Z error installer failed to create "PeerAuthentication/istio-system/grafana-ports-mtls-disabled": Timeout: request did not complete within requested timeout 30s
✘ Addons encountered an error: failed to create "PeerAuthentication/istio-system/grafana-ports-mtls-disabled": Timeout: request did not complete within requested timeout 30s
- Pruning removed resources
Error: failed to apply manifests: errors occurred during operation

I assume there is something wrong either with istioctl install on AWS or with your cluster.
You could try to create a new EKS cluster and check if the installation works there; if it doesn't, I would suggest opening a new issue on the Istio GitHub repository.
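For a quick throwaway test cluster, a minimal eksctl sketch could look like this (cluster name, region and node count are placeholders, adjust them to your account):
# create a small test EKS cluster
eksctl create cluster --name istio-test --region us-east-1 --nodes 2
# eksctl updates your kubeconfig, so you can verify access right away
kubectl get nodes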
If you have the same problem as @Possathon Chitpidakorn, you can use the Istio operator as a workaround to install Istio; more about it below.
istio operator
Every operator implementation requires a custom resource definition (CRD) to define its custom resource, that is, its API. Istio’s operator API is defined by the IstioControlPlane CRD, which is generated from an IstioControlPlane proto. The API supports all of Istio’s current configuration profiles using a single field to select the profile. For example, the following IstioControlPlane resource configures Istio using the demo profile:
apiVersion: install.istio.io/v1alpha2
kind: IstioControlPlane
metadata:
  namespace: istio-operator
  name: example-istiocontrolplane
spec:
  profile: demo
You can then customize the configuration with additional settings. For example, to disable telemetry:
apiVersion: install.istio.io/v1alpha2
kind: IstioControlPlane
metadata:
  namespace: istio-operator
  name: example-istiocontrolplane
spec:
  profile: demo
  telemetry:
    enabled: false
How to install Istio with the Istio operator
Prerequisites
Perform any necessary platform-specific setup.
Check the Requirements for Pods and Services.
Install the istioctl command.
Deploy the Istio operator:
istioctl operator init
This command runs the operator by creating the following resources in the istio-operator namespace:
The operator custom resource definition
The operator controller deployment
A service to access operator metrics
Necessary Istio operator RBAC rules
See the available istioctl operator init flags to control which namespaces the controller and Istio are installed into, and the installed Istio image sources and versions.
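A hedged sketch of what that could look like (the flag names are assumptions based on the Helm values shown below, so verify them with istioctl operator init --help):
# install the operator controller into a custom namespace with a pinned image version
istioctl operator init \
  --operatorNamespace istio-operator \
  --istioNamespace istio-system \
  --hub docker.io/istio \
  --tag 1.6.3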
You can alternatively deploy the operator using Helm:
$ helm template manifests/charts/istio-operator/ \
--set hub=docker.io/istio \
--set tag=1.6.3 \
--set operatorNamespace=istio-operator \
--set istioNamespace=istio-system | kubectl apply -f -
Note that you need to download the Istio release to run the above command.
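If you still need to fetch the release, the download looks like this (version pinned to 1.6.3 to match the tag above):
# download and unpack the 1.6.3 release, then run the helm template command from its root directory
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.6.3 sh -
cd istio-1.6.3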
To install the Istio demo configuration profile using the operator, run the following command:
$ kubectl create ns istio-system
$ kubectl apply -f - <<EOF
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: example-istiocontrolplane
spec:
  profile: demo
EOF
The controller will detect the IstioOperator resource and then install the Istio components corresponding to the specified (demo) configuration.
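To verify that the reconciliation happened, a few plain kubectl commands are enough (output details vary by Istio version):
# confirm the custom resource exists and watch the control plane come up
kubectl get istiooperator -n istio-system
kubectl get pods -n istio-system
kubectl get svc -n istio-system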

Related

Connecting an app in ArgoCD to use a Helm OCI repository

I can see that Argo CD seems to support OCI repositories, but I can't get this to work.
First, I can only add repositories through the CLI, because the UI has no option for enabling OCI.
argocd repo add <uri> --type helm --name name --enable-oci
However, when adding an app using the UI, the Argo server logs "unsupported protocol scheme ''" when I select the repository. I have tried a URI with HTTPS and an empty scheme (as mentioned in the issues).
Is it possible to use the UI for OCI repositories or is it a command line thing only?
I am using Argo version 2.0.4
I used the following command and it worked for me.
argocd repo add <acr name>.azurecr.io --type helm --name <some name> --enable-oci --username <username> --password <password>.
You can also try to configure it declaratively: issue-7121
apiVersion: v1
stringData:
  enableOCI: "true"
  name: my-oci-charts
  password: token-password
  type: helm
  url: registry.gitlab.com/asdasd/charts
  username: token-name
kind: Secret
metadata:
  labels:
    argocd.argoproj.io/secret-type: repository
  name: my-oci-charts
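Once the Secret is saved to a file (the file name below is just an example), apply it into the namespace Argo CD runs in, typically argocd, and check that the repository shows up:
# register the OCI Helm repository declaratively
kubectl apply -n argocd -f my-oci-charts-secret.yaml
# verify that Argo CD picked it up
argocd repo list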

How to manage multiple GKE projects in one Google Cloud Account [duplicate]

This question already has answers here: Run a single kubectl command for a specific project and cluster? (2 answers). Closed 2 years ago.
Given a situation where I have three separate GKE instances in different Google Cloud projects under the same billing account, how can I configure kubectl so that the commands I execute with it only apply to a specific cluster?
kubectl access to Kubernetes API servers is managed by configuration contexts.
Here is some documentation for how to do so. In a nutshell, you would stand up multiple Kubernetes clusters and then specify a configuration like so:
apiVersion: v1
kind: Config
preferences: {}
clusters:
- cluster:
  name: development
- cluster:
  name: scratch
users:
- name: developer
- name: experimenter
contexts:
- context:
  name: dev-frontend
- context:
  name: dev-storage
- context:
  name: exp-scratch
To automatically generate one, you can run the following commands:
# Add cluster details to the file
kubectl config --kubeconfig=config-demo set-cluster development --server=https://1.2.3.4 --certificate-authority=fake-ca-file
kubectl config --kubeconfig=config-demo set-cluster scratch --server=https://5.6.7.8 --insecure-skip-tls-verify
# Add user details to the configuration file
kubectl config --kubeconfig=config-demo set-credentials developer --client-certificate=fake-cert-file --client-key=fake-key-seefile
kubectl config --kubeconfig=config-demo set-credentials experimenter --username=exp --password=some-password
# Add context details to the configuration file
kubectl config --kubeconfig=config-demo set-context dev-frontend --cluster=development --namespace=frontend --user=developer
kubectl config --kubeconfig=config-demo set-context dev-storage --cluster=development --namespace=storage --user=developer
kubectl config --kubeconfig=config-demo set-context exp-scratch --cluster=scratch --namespace=default --user=experimenter
After that, you can save the context. Then, going forward, when you run a kubectl command, the action will apply to the cluster and namespace listed in the specified context. For example:
kubectl config --kubeconfig=config-demo use-context dev-frontend
To then change the context to another one you specified:
kubectl config --kubeconfig=config-demo use-context exp-scratch
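For GKE specifically, you normally don't write these entries by hand: gcloud container clusters get-credentials generates the cluster, user and context entries for you, one per cluster and project (project, cluster and zone names below are placeholders):
# fetch credentials for each cluster; each call adds a context to your kubeconfig
gcloud container clusters get-credentials cluster-a --zone us-central1-a --project project-a
gcloud container clusters get-credentials cluster-b --zone europe-west1-b --project project-b
# list the generated contexts and switch between them
kubectl config get-contexts
kubectl config use-context <context-name-from-the-list>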

Unable to get aws-iam-authenticator in config-map while applying through AWS CodeBuild

I am making a CI/CD pipeline, using AWS CodeBuild to build and deploy an application (service) to an AWS EKS cluster. I have installed kubectl and aws-iam-authenticator properly, but I am getting aws instead of aws-iam-authenticator as the command in the generated config:
kind: Config
preferences: {}
users:
- name: arn:aws:eks:ap-south-1:*******:cluster/DevCluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - eks
      - get-token
      - --cluster-name
      - DevCluster
      command: aws
      env: null
[Container] 2019/05/14 04:32:09 Running command kubectl get svc 
error: the server doesn't have a resource type "svc"
I do not want to edit the configmap manually because it comes through the pipeline.
As @Priya Rani said in the comments, they found the solution.
There is no issue with the configmap file; it's all right.
1) The CloudFormation (cluster + node instance) trusted role needs to be edited so that it can communicate with CodeBuild.
2) A user data section needs to be added so that the node instances can communicate with the cluster.
Why don't you just load a proper/dedicated kubeconfig file by setting the KUBECONFIG env variable inside your CI/CD pipeline, like this:
export KUBECONFIG=$KUBECONFIG:~/.kube/config-devel
which would include the right command to use with aws-iam-authenticator:
#
# config-devel
#
...
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "<cluster-name>"

EKS AWS: Can't connect Worker Node

I am very stuck on the "Launching worker nodes" step of the AWS EKS guide. To be honest, at this point, I don't know what's wrong.
When I do kubectl get svc, I get my cluster, so that's good news.
I have this in my aws-auth-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::Account:role/rolename
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
Here is my config in .kube
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: CERTIFICATE
    server: server
  name: arn:aws:eks:region:account:cluster/clustername
contexts:
- context:
    cluster: arn:aws:eks:region:account:cluster/clustername
    user: arn:aws:eks:region:account:cluster/clustername
  name: arn:aws:eks:region:account:cluster/clustername
current-context: arn:aws:eks:region:account:cluster/clustername
kind: Config
preferences: {}
users:
- name: arn:aws:eks:region:account:cluster/clustername
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - clustername
      command: aws-iam-authenticator.exe
I have launched an EC2 instance with the advised AMI.
Some things to note:
I launched my cluster with the CLI,
I created some Key Pair,
I am not using the Cloudformation Stack,
I attached these policies to the role of my EC2 instance: AmazonEKS_CNI_Policy, AmazonEC2ContainerRegistryReadOnly, AmazonEKSWorkerNodePolicy.
It is my first attempt at kubernetes and EKS, so please keep that in mind :). Thanks for your help!
Your config file and auth file look right. Maybe there is some issue with the security group assignments? Can you share the exact steps that you followed to create the cluster and the worker nodes?
And any special reason why you had to use the CLI instead of the console? I mean if it's your first attempt at EKS, then you should probably try to set up a cluster using the console at least once.
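One quick thing to check from the CLI is which security groups and subnets the control plane uses, so you can compare them with what is attached to your worker instance (cluster name and instance ID are placeholders):
# show the VPC config of the control plane, including its security groups
aws eks describe-cluster --name clustername --query "cluster.resourcesVpcConfig"
# compare with the security groups attached to the worker node
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 --query "Reservations[].Instances[].SecurityGroups"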
Sometimes, for whatever reason, the aws-auth ConfigMap does not get applied automatically, so we need to add it manually. I had this issue, so I'm leaving this here in case it helps someone.
Check to see if you have already applied the aws-auth ConfigMap.
kubectl describe configmap -n kube-system aws-auth
If you receive an error stating "Error from server (NotFound): configmaps "aws-auth" not found", then proceed
Download the configuration map.
curl -o aws-auth-cm.yaml https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/aws-auth-cm.yaml
Open the file with your favorite text editor. Replace <ARN of instance role (not instance profile)> with the Amazon Resource Name (ARN) of the IAM role associated with your nodes, and save the file.
Apply the configuration.
kubectl apply -f aws-auth-cm.yaml
Watch the status of your nodes and wait for them to reach the Ready status.
kubectl get nodes --watch
You can also go to your AWS console and watch the worker nodes being added.
Find more info here

vsystem-vrep of vora at Waiting: CrashLoopBackOff

Trying to set up Vora 2 on an AWS kops k8s cluster.
The pod vsystem-vrep cannot start.
In the log file on the node I see:
sudo cat vsystem-vrep_30.log
{"log":"2018-03-27 12:54:04.164349|+0000|INFO |Starting Kernel NFS Server||vrep|1|Start|server.go(41)\u001e\n","stream":"stderr","time":"2018-03-27T12:54:04.164897827Z"}
{"log":"2018-03-27 12:54:04.164405|+0000|INFO |Creating directory /exports||dir-handler|1|makeDir|dir_handler.go(40)\u001e\n","stream":"stderr","time":"2018-03-27T12:54:04.164919387Z"}
{"log":"2018-03-27 12:54:04.164423|+0000|INFO |Listening for private API on port 8738||vrep|18|func1|server.go(45)\u001e\n","stream":"stderr","time":"2018-03-27T12:54:04.164923893Z"}
{"log":"2018-03-27 12:54:04.166992|+0000|INFO |Configuring Kernel NFS Server||vrep|1|configure|server.go(126)\u001e\n","stream":"stderr","time":"2018-03-27T12:54:04.167109138Z"}
{"log":"2018-03-27 12:54:04.219089|+0000|INFO |Configuring Kernel NFS Server||vrep|1|configure|server.go(126)\u001e\n","stream":"stderr","time":"2018-03-27T12:54:04.219235263Z"}
{"log":"2018-03-27 12:54:04.230256|+0000|FATAL|Error starting NFS server: RPC service for NFS server has not been correctly registered||vrep|1|main|server.go(51)\u001e\n","stream":"stderr","time":"2018-03-27T12:54:04.230526346Z"}
How can I solve this?
When installing Vora 2.1 on AWS with kops, you first need to set up an RWX storage class, which is needed by vsystem (the default AWS storage class only supports ReadWriteOnce). During installation, you need to point to that storage class using the parameter --vsystem-storage-class. Additionally, the parameter --vsystem-load-nfs-modules needs to be set. I suspect that the error happened because that last parameter was missing.
For example, a call of install.sh would look like this:
./install.sh --accept-license --deployment-type=cloud --namespace=xxx \
  --docker-registry=123456789.dkr.ecr.us-west-1.amazonaws.com \
  --vora-admin-username=xxx --vora-admin-password=xxx \
  --cert-domain=my.host.domain.com --interactive-security-configuration=no \
  --vsystem-storage-class=aws-efs --vsystem-load-nfs-modules
An RWX storage class can, for example, be created as follows:
Create an EFS file system in the same region as the kops cluster - see https://us-west-2.console.aws.amazon.com/efs/home?region=us-west-2#/filesystems
Create file system
Select VPC of kops cluster
Add kops master and worker security groups to mount target
Optionally give it a name (e.g. same as your kops cluster, to know what it is used for)
Use default options for the remaining
Once created, note the DNS name (similar to fs-1234e567.efs.us-west-2.amazonaws.com).
Create a persistent volume and storage class for Vora
E.g. use YAML files similar to the ones below and point them to the newly created EFS file system.
$ cat create_pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: vsystem-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: aws-efs
  nfs:
    path: /
    server: fs-1234e567.efs.us-west-2.amazonaws.com
$ cat create_sc.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-efs
provisioner: xyz.com/aws-efs
kubectl create -f create_pv.yaml
kubectl create -f create_sc.yaml
# check if the newly created PV and storage class exist
kubectl get pv
kubectl get storageclasses