Import manually created K8s cluster into KOps

I've spent quite some time visiting every web page mentioning "KOps import", but did not find a way to import my manually created K8s cluster. By "manually created" I mean the infrastructure was deployed on AWS using Terraform, and Kubernetes was installed by a Terraform provisioner running a shell script. Managing the environment manually has become a pain, so I'd like to move it under KOps. For that I have done the following so far:
Installed the AWS CLI, kubectl and kops on my local machine.
Created a KOps user with the AmazonEC2FullAccess, AmazonRoute53FullAccess,
AmazonS3FullAccess, IAMFullAccess and AmazonVPCFullAccess policies,
and generated access and secret keys.
Configured the credentials using aws configure.
Created an S3 bucket to store the state.
Set environment variables such as the region and cluster name.
Finally, ran the kops import command as below:
kops import cluster --region ${REGION} --name ${OLD_NAME}
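For completeness, the setup steps above correspond roughly to this sequence (the bucket, region and cluster values are taken from the verbose log below, not invented):
export REGION=us-east-1
export OLD_NAME=jjm-prod-use1-kubernetes
export KOPS_STATE_STORE=s3://kops-state-store-jjm
aws configure                                              # KOps user's access and secret keys
aws s3 mb s3://kops-state-store-jjm --region ${REGION}     # state bucket
kops import cluster --region ${REGION} --name ${OLD_NAME}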
But the command encountered the error below:
Cluster.kops "jjm-prod-use1-kubernetes" not found
With verbose logging enabled:
$ kops import cluster --region ${REGION} --name ${OLD_NAME} -v 10
I0131 16:32:12.059651 25683 factory.go:68] state store s3://kops-state-store-jjm
I0131 16:32:13.133145 25683 s3context.go:194] found bucket in region "us-east-1"
I0131 16:32:13.133174 25683 s3fs.go:220] Reading file "s3://kops-state-store-jjm/jjm-prod-use1-kubernetes/config"
Which made me serious about posting this question. Is there any way a K8s cluster created by anything other than kube-up.sh can be brought under the control of KOps? Please advise.
Note: There's no way I can re-create (destroy and create) the clusters as they are running in production.
EDIT: I know this can be achieved only if the cluster was set up using kube-up.sh. But is there any other way?

That is only possible with a cluster bootstrapped via the kube-up.sh script, as officially stated in the KOps documentation. Note that kube-up.sh has since been excluded from the list of supported Kubernetes installation tools for AWS. Although a cluster composed by kube-up.sh offers a lot of customization settings specific to AWS, the initial script uses environment variables to define those settings. Therefore, I assume this would be quite hard to achieve in your case.
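For context, a kube-up.sh cluster was configured entirely through environment variables along these lines (illustrative values from the old AWS getting-started docs, not taken from the asker's setup):
export KUBERNETES_PROVIDER=aws     # target AWS instead of GCE
export KUBE_AWS_ZONE=us-east-1a    # availability zone for the cluster
export NUM_NODES=3                 # number of worker nodes
export MASTER_SIZE=m3.medium       # instance types
export NODE_SIZE=t2.medium
cluster/kube-up.sh                 # run from a Kubernetes release checkout
Since a Terraform-provisioned cluster was never described this way, kops import has nothing it recognizes to read.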

Related

How to expose the kubeconfig file after creating an EKS cluster with aws_eks_cluster in Terraform?

The eks module can generate a kubeconfig output, but the aws_eks_cluster resource doesn't have this feature.
Why not add this feature?
This feature was removed in v18 of the AWS EKS module; from the docs:
Support for managing kubeconfig and its associated local_file
resources have been removed; users are able to use the awscli provided
aws eks update-kubeconfig --name <cluster_name> to update their local
kubeconfig as necessary
The Terraform eks module exposes that file by default; you can take a look at their files or even use their module. It's relatively easy to set up and works great. Link: the eks module. I'm not 100% sure that's the exact section for it, but you can take a look at their whole repo.
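As a practical alternative, you can skip generating the file in Terraform entirely and build it with the AWS CLI from your Terraform outputs. A minimal sketch, assuming your configuration defines a cluster_name output (that output name is hypothetical):
# terraform output -raw requires Terraform >= 0.14; AWS CLI must be configured
CLUSTER_NAME=$(terraform output -raw cluster_name)
aws eks update-kubeconfig --name "$CLUSTER_NAME" --region us-east-1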

How can I access AWS from a pod running under EKS/KOPS Cluster?

I have a pod which I plan to run under an EKS or KOps managed cluster.
The pod does some calculations and I want to write the results to DynamoDB.
How can I access AWS DynamoDB from it?
Also, say I want to package it using Helm: is there a way to keep all of the configuration required to access AWS within the pod's Helm package, without any cluster-level configuration?
You need an AWS IAM role mapped to a Kubernetes ServiceAccount. Try using this user guide: https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html
For KOps you can also use the Kiam project; think of it as an IAM proxy: https://github.com/uswitch/kiam
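A minimal sketch of the EKS route (IRSA): annotate a ServiceAccount with the IAM role, then have the pod, or your Helm chart, reference that ServiceAccount. The account ID, role and ServiceAccount names below are placeholders:
kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dynamo-writer
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/dynamo-writer-role
EOF
The role needs a DynamoDB policy attached and a trust relationship with the cluster's OIDC provider; with serviceAccountName: dynamo-writer set in the pod spec, the AWS SDK inside the container picks up credentials automatically. That also answers the Helm question: the chart only needs to carry the ServiceAccount and its annotation, while the IAM side lives outside the cluster.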

Generating a kubeconfig for access to an Amazon EKS cluster

Given a scenario where I have two Kubernetes clusters, one hosted on AWS EKS and the other on another cloud provider, I would like to manage the EKS cluster from the other cloud provider. What's the easiest way to authenticate such that I can do this?
Would it be reasonable to generate a kubeconfig where I embed the result from aws eks get-token (or something like that) into the config used on the other cloud provider? Or are these tokens not persistent?
Any help or guidance would be appreciated!
I believe the most correct approach is the one described in Create a kubeconfig for Amazon EKS.
Yes, you create a kubeconfig that uses aws eks get-token, and later add the newly created config to the KUBECONFIG environment variable, e.g.
export KUBECONFIG=$KUBECONFIG:~/.kube/config-aws
or you can add it to .bash_profile for your convenience
echo 'export KUBECONFIG=$KUBECONFIG:~/.kube/config-aws' >> ~/.bash_profile
For detailed steps, please refer to the provided URL.
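On the persistence question: the token that aws eks get-token returns is short-lived, which is why the kubeconfig embeds the command itself rather than a static token; kubectl then fetches a fresh token on every call. A minimal sketch of the users entry such a kubeconfig contains (cluster name is a placeholder):
users:
- name: eks-admin
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args: ["eks", "get-token", "--cluster-name", "my-cluster"]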
I had a use case where I needed to work with multiple cloud providers,
so I created kubech to deal with that situation and manage multiple clusters simultaneously.
Assuming you have a Linux machine at the second cloud provider, you can use the following command to generate the kubeconfig file:
aws eks update-kubeconfig --region <region-code> --name <cluster-name>
You can write to a different file using the --kubeconfig flag.
Ref: https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html
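For example, to keep the EKS config in its own file and point kubectl at it (region, cluster name and path are placeholders):
aws eks update-kubeconfig --region us-east-1 --name my-cluster --kubeconfig ~/.kube/config-aws
export KUBECONFIG=~/.kube/config-aws
kubectl get nodes    # verify access to the EKS cluster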

How to deploy a Kubernetes application to EKS through Jenkins

I'm trying to deploy a Kubernetes application to AWS EKS through Jenkins.
I visited a few blogs, and they mentioned Jenkins X. But Jenkins X needs to be configured separately, and per our requirements we have to use our existing Jenkins for the K8s app deployment.
Note: AWS EKS and Jenkins run on separate machines (we are using our existing Jenkins). I may need to create a new EKS environment based on requirements.
Please suggest an AWS EKS plugin for Jenkins that can be used for deployment, if one exists.
Else
Is there any way to create a custom Bash automation script for deploying a K8S application in AWS EKS?
My research so far: AWS provides API/SDK support only for creating/managing clusters, not for deploying applications into the K8s environment (that's done with kubectl).
So creating the cluster can probably be done through the SDK, but how do we deploy a K8s application remotely (since Jenkins is running on another machine)?
Why not configure kubectl for Jenkins and deploy apps using kubectl apply -f deployment.yaml?
Once you have the kubectl config you can save it as secret text in Jenkins. I had an assignment for an interview, and here is an example of such a deployment:
https://github.com/mtuktarov/hello
It uses a shared library:
https://github.com/mtuktarov/hello-jenkins-lib
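A minimal sketch of what the deploy step boils down to, assuming the stored kubeconfig is injected into the build as a file whose path ends up in an environment variable (the variable name is hypothetical):
# $KUBECONFIG_FILE is assumed to be provided by the Jenkins credentials binding
kubectl --kubeconfig "$KUBECONFIG_FILE" apply -f deployment.yaml
kubectl --kubeconfig "$KUBECONFIG_FILE" rollout status deployment/myapp   # myapp is a placeholder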
Finally, I completed this exercise by creating a Bash automation script, following these steps (sketched after this list):
Created a Docker image with the application binary.
Created an EKS cluster using eksctl create cluster <PARAM>, which creates the EKS control plane and worker nodes.
Created a Kubernetes deployment file for the Docker image and deployed it using the kubectl apply <PARAM> command line.
Exposed the application using the kubectl expose <PARAM> CLI.
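A condensed sketch of those steps (image, cluster and resource names are placeholders, and the image has to live in a registry EKS can pull from, e.g. ECR):
docker build -t <account>.dkr.ecr.us-east-1.amazonaws.com/myapp:v1 .
docker push <account>.dkr.ecr.us-east-1.amazonaws.com/myapp:v1
eksctl create cluster --name demo --region us-east-1 --nodes 2
kubectl apply -f deployment.yaml                                  # references the pushed image
kubectl expose deployment myapp --type=LoadBalancer --port=80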
Latest update from the AWS EKS service:
AWS recently announced support for creating EKS worker nodes through the AWS SDK, so the whole EKS environment can now be created using the SDK itself.
===================
Update:
Now AWS supports creating worker nodes through the UI and the AWS SDK:
https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/EKS.html#createNodegroup-property

How to use AWS IAM Role for Packer build command inside Jenkins Pipeline using Kubernetes / Docker Slave

I'm using Jenkins Pipeline and Packer to create AMIs inside an AWS account.
Jenkins uses a Kubernetes cluster for its agents (via a cloud plugin that lets me parameterize the Docker pod templates).
I have a pipeline that pulls a Git project containing the Packer template and runs packer validate, which succeeds. Then it runs packer build and I get the following error:
Build 'Amazon Linux 2 Classic' errored: No valid credential sources found for AWS Builder. Please see https://www.packer.io/docs/builders/amazon.html#specifying-amazon-credentials for more information on providing credentials for the AWS Builder.
I also use kube2iam to provide roles on my agent containers.
In my Packer template I don't define any AWS credentials, since I want to use a role rather than keys. Do you know if I have to do something inside the Packer template to indicate which role to use?
Best Regards,
Tony.
From what I understand, you are running Jenkins inside a Kubernetes cluster that runs on AWS EC2 instances? If so, the Jenkins agents running the build should be able to read the available roles from the metadata of the instance they're running on.
In that case, the process would be to assign the desired IAM role to the instances, and Kubernetes should be able to handle the rest.
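Since the question mentions kube2iam: it intercepts the instance-metadata calls and serves per-pod credentials based on a pod annotation, so the usual fix is to annotate the Jenkins agent pod rather than change the Packer template. A minimal sketch (pod and role names are placeholders):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: packer-agent
  annotations:
    iam.amazonaws.com/role: packer-build-role   # kube2iam's role annotation
spec:
  containers:
  - name: packer
    image: hashicorp/packer:light
    command: ["sleep", "infinity"]
EOF
Packer's Amazon builder walks the default AWS credential chain, so once the pod can assume the role via the metadata endpoint, packer build needs no credential settings in the template.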