Do we really need kubectl to be part of Kubernetes nodes?
I don't know if there is a "correct" answer to this, but I feel it is needed, since a resource that has been created should be manageable.
I understand your point: most of us have gcloud and kubectl installed on our local machines, but this is not the case for everyone. There could be developers who only have SSH access to the nodes, without a sufficient IAM role or even gcloud installed on their local machine. So, if you are just able to SSH into any of the nodes, you should be able to view (or add, delete and edit, as required) the resources on the cluster.
Personally, I have never felt the need for this, as I have an editor role in my project, but there can be situations/people who do not even have gcloud or kubectl installed, or any such access, or who are using a VDI, or where, for security reasons (e.g. developers are allowed to carry their laptops home), the organization does not allow these tools on their local machines and forces them to access the cluster through the nodes only.
So, in my opinion, this could be one of the use cases the creators had in mind when they decided to keep it on every node.
Another possibility is compatibility. Imagine you upgrade kubectl on your local machine to a newer version that is incompatible (or whose default behaviour has changed) with one of your older Kubernetes clusters.
So, in a way, keeping kubectl on the nodes ensures that a compatible version is always available there.
Note: in ideal situations, commands and APIs should be backward compatible.
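If you want to sanity-check the client/server version skew yourself, a quick way to do that (the output format varies between kubectl releases) is:

# compare the local client version with the API server version of the cluster
kubectl version
# print only the client version (e.g. for the copy that ships on a node)
kubectl version --client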
I have a Linux EC2 instance hosting some legacy tools that I need to move to another AWS account. We hoped to do this by making an AMI, sharing it to the other account, bringing up the new instance from the AMI, etc.
The problem: the instance was built on an AWS Marketplace image of Debian - I must opt in to the terms and conditions (and subscribe) in order to use the image. However, it is Debian 8 (https://aws.amazon.com/marketplace/pp/prodview-5pgbnftzmrgec), which is no longer offered in the Marketplace. Since the base image is no longer offered, I cannot opt in.
Is it possible to just upgrade/update the source instance to Debian 9 or 10 (both are still offered in the marketplace) so that I will be able to accept the T&C? Or is there some way to tell the AMI itself to use Debian 9 instead?
If not, I am looking at an old style file-based migration, and I was really hoping not to have to get into the guts of this server (it's a legacy integration) just yet. (It's on the to-do list, I just really wanted to get the migration to our AWS account finished first.)
I found this related question, but the suggested answer does not work - I can create and share the snapshot, even create a volume, but I cannot attach/mount the volume without "accept[ing] terms and subscribe" to the underlying product (Debian 8).
Can't export a EC2 AMI to another account because the AWS Marketplace OS is obsolete
Thanks for any advice!
I don't really understand how to install something from the GCP Marketplace onto a Compute Engine instance which has already been created (Windows Server). For instance, I need to deploy Jenkins to practice with CI, but when I choose that solution from the Marketplace it just deploys as a separate entry below my VM in the list and looks like a separate process, whereas I need it on exactly the machine I reach over RDP.
It is unlikely there is a good Marketplace-based solution for your use case.
Depending on the type of solution you pick off the Marketplace, you'll get different behaviour. Many of the solutions in the Marketplace are self-contained -- they'll install the infrastructure they need to run, such as additional VMs. This is done via Deployment Manager. They won't install on VMs you have already provisioned. (This also lets the software and infrastructure be removed easily.)
Others just provide a container which you can place on an already running VM (for example, this Jenkins package). These will require more work on your part to manage and keep updated, of course (and you will obviously need to find a container that works on your Windows machine if this is the route you want to go). I don't currently see an obvious candidate in the Marketplace for Jenkins.
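To give an idea of what the "container on an existing VM" route looks like in practice: on a Linux VM with Docker installed it would be roughly the following, using the official jenkins/jenkins image (whether an equivalent Windows container fits your RDP host is the open question here):

# run the Jenkins LTS container, exposing the web UI (8080) and agent port (50000),
# with a named volume so the Jenkins home directory survives container restarts
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts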
A third type of Marketplace package is "click to deploy". These bring up a GKE cluster to run the containers on, but this likely isn't what you're looking for if you don't want additional VMs.
I installed my existing Kubernetes cluster (1.8), running in AWS, using kops.
I would like to add Windows containers to the existing cluster, but I cannot find the right solution! :(
I thought of following the steps given in:
https://kubernetes.io/docs/getting-started-guides/windows/
I downloaded the node binaries and copied them to my Windows machine (kubelet, kube-dns, kube-proxy, kubectl), but I got a little confused by the multiple networking options.
They also describe a kubeadm option for joining the node to my master, which I do not understand, since I used kops to create my cluster.
Can someone advise or help me on how I can get my Windows node added?
Kops is really good if the default architecture satisfies your requirements; if you need to make some changes, it will give you some trouble. For example, I needed to add a GPU node: I was able to add it, but I was unable to make the process automatic, because I could not create an auto scaling group for it.
Kops has a lot of pros, like creating the whole cluster in a transparent way.
Do you really need a Windows node?
If yes, try launching a cluster using kubeadm, then joining the Windows node to this cluster.
Kops will take some time to add this Windows node feature.
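For reference, joining a worker node to a kubeadm-bootstrapped cluster boils down to running something like this on the new node (the token and hash are printed by kubeadm init on the master; the values below are placeholders):

# run on the node to be added, after installing kubelet and kube-proxy on it
kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>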
I am very new to Kubernetes so apologies for gaps in my understanding and possibly incorrect wording.
I am developing on my local MacBook Pro, which is somewhat resource constrained. My actual payload is a database, which is already running in a Docker container, but obviously needs some sort of persistent storage.
The individual containers also need to talk to each other over the network, and some of them need a channel (an open port) to the outside world.
I would like to set up a single Kubernetes cluster for dev and testing purposes that I can later easily deploy to bare-metal servers or a cloud vendor - Google or AWS.
From reading so far it looks like I can, for example, use minikube and orchestrate that cluster on top of the VirtualBox installation I am already running.
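For example, I was imagining something along these lines (assuming the VirtualBox driver; I understand the flag name may differ between minikube versions):

# start a single-node cluster inside a VirtualBox VM
minikube start --vm-driver=virtualbox
# point kubectl at it and sanity-check the node
kubectl get nodes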
How would that then map to an actual deployment in the cloud?
What additional tools do I need to get it all running, especially with regards to persistent storage and network?
Will it map easily to the cloud?
What configuration management software would you recommend to maintain all that configuration?
A very short answer is that it's hard to do this properly.
One of the best options I know of is LinuxKit: it allows you to build identical images that you can run on any of the popular cloud providers, in a data centre of your own, or on a desktop hypervisor. In fact, this is what Docker for Mac is based on.
Disclaimer: I am one of the LinuxKit contributors.
Generally you get more or less the same Kubernetes regardless of the method you use to spin up the cluster. However, compared to the cloud, other deployments will usually lack what the cloud provides by default via Kubernetes' built-in cloud providers. Some very important features this relates to are things like out-of-the-box support for LoadBalancer-type Services and automatic PersistentVolume provisioning.
If you're OK with not having them, or with configuring them additionally for your dev/test environment, then you should be quite fine.
Regarding PVC/PV, the lack of an automatic PV provisioner (unless you set up something like GlusterFS with Heketi to support this) means that you will have to provision every PV manually on the dev/test cluster, as opposed to this happening automatically in the cloud.
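As an illustration, manual provisioning on a dev/test cluster could look roughly like this (a hostPath-backed volume is only suitable for single-node dev setups; the names, size and path are placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: db-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/db-pv
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi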
Also, as you begin, there are bound to be some minor differences between your dev/test setup and prod, so you might really want to investigate manifest templating and management solutions like Helm from day one of your work with deployments to Kubernetes. I know it would have saved me a lot of headache if I had done that myself when I started with Kubernetes.
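A rough sketch of that workflow (Helm 2-era syntax; the chart name, release names and values files below are placeholders):

# scaffold a chart, then install it with per-environment value overrides
helm create my-app-chart
helm install --name my-app-dev -f values-dev.yaml ./my-app-chart
helm install --name my-app-prod -f values-prod.yaml ./my-app-chart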
Focusing a bit on your inquiry on the database, I think you have two options (assuming cloud is still an option for you):
use a Docker database image and mount volumes
use an RDS instance in the case of AWS
I believe that, in the case of databases, using volumes is generally not recommended.
What I would suggest you do (once you grasp the basic concepts a bit, mainly Services) is to:
create an RDS instance and your needed databases therein
expose this RDS instance as a Service of type ExternalName
I have been doing the following and so far it is working:
apiVersion: v1
kind: Service
metadata:
  name: my-database-service
  namespace: some-namespace
spec:
  type: ExternalName
  externalName: <my-rds-endpoint>
After that, the rest of your k8s services can reach this database via my-database-service.
I think this approach is more consistent database-wise and saves the hassle of managing volumes.
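For illustration, an application Pod in the same namespace can then just point its connection settings at that Service name (a sketch; the image and env variable name are whatever your application expects):

apiVersion: v1
kind: Pod
metadata:
  name: my-app
  namespace: some-namespace
spec:
  containers:
    - name: app
      image: my-app:latest
      env:
        # resolves to the RDS endpoint through the ExternalName Service
        - name: DATABASE_HOST
          value: my-database-service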
That being said, I acknowledge that the guidelines in terms of "pick this if you go for cloud" or "that if you go on-prem" are not quite clear yet.
My experience so far indicates that:
most likely for on-prem (not just your localhost) the way to go is kubeadm
for AWS I have been having a pleasant experience with kops so far.
there is also the Canonical solution that seems to use a stack (conjure-up/juju) to help deploy their own slightly modified version of Kubernetes that they claim suits both cloud/on-prem (haven't tried it at all).
I'm wondering how people are deploying a production-caliber Kubernetes cluster in AWS and, more importantly, how they chose their approach.
The k8s documentation points towards kops for Debian, Ubuntu, CentOS, and RHEL or kube-aws for CoreOS/Container Linux. Among these choices it's not clear how to pick one over the others. CoreOS seems like the most compelling option since it's designed for container workloads.
But wait, there's more.
bootkube seems to be the next iteration of the CoreOS deployment technology and is on the roadmap for inclusion within kube-aws. Should I wait until kube-aws uses bootkube?
Heptio recently announced a Quickstart architecture for deploying k8s in AWS. This is the newest and so probably the least mature approach, but it does seem to have gained traction from within AWS.
Lastly kubeadm is a thing and I'm not really sure where it fits into all of this.
There are probably more approaches that I'm missing too.
Given the number of options with overlapping intent it's very difficult to choose a path forward. I'm not interested in a proof-of-concept. I want to be able to deploy a secure, highly-available cluster for production use and be able to upgrade the cluster (host OS, etcd, and k8s system components) over time.
What did you choose and how did you decide?
I'd say pick anything which fits your needs (see also Picking the right solution)...
Which could be:
Speed of the cluster setup
Integration with your existing toolchain
e.g. kops integrates with Terraform, which might be a good fit for some people (see the sketch after this list)
Experience within your team/company/...
e.g. how comfortable are you with the related Linux distribution
Required maturity of the tool itself
some tools are very alpha; are you willing to play the role of an early adopter?
Ability to upgrade between Kubernetes versions
kubeadm has this on its agenda; some others prefer to throw away clusters instead of upgrading them
Required integration into external tools (monitoring, logging, auth, ...)
Supported cloud providers
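To give a flavour of the kops/Terraform integration mentioned above (a sketch only; the cluster name, state bucket and zone are placeholders):

# generate Terraform configuration for the cluster instead of creating it directly
kops create cluster \
  --name=my-cluster.example.com \
  --state=s3://my-kops-state-bucket \
  --zones=us-east-1a \
  --target=terraform \
  --out=./out/terraform
# then review and apply it with Terraform
cd out/terraform && terraform init && terraform apply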
With your specific requirements I'd pick the Heptio or kubeadm approach.
Heptio if you can live with the given constraints (e.g. predefined OS)
kubeadm if you need more flexibility; everything done with kubeadm can be transferred to other cloud providers
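For completeness, the kubeadm bootstrap itself is essentially the following (a rough sketch; the pod network CIDR depends on the network add-on you choose):

# on the first master/control-plane node
kubeadm init --pod-network-cidr=10.244.0.0/16
# install a pod network add-on matching that CIDR (e.g. flannel),
# then join workers with the kubeadm join command that kubeadm init prints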
Other options for AWS lower on my list:
Kubernetes the hard way - using this might be the only true way to set up a production cluster, as it is the only way you can fully understand each moving part of the system. It is lower on the list because the result from any of the tools is often more than enough, even for production.
kube-up.sh - deprecated by the community, so I'd not use it for new projects
kops - my team had some strange experiences with it which seemed to be due to our (custom) needs back then (an existing VPC); that's why it's lower on my list. It would be #1 for an environment where Terraform is used too.
bootkube - lower on my list because of its limitation to CoreOS
Rancher - interesting toolchain, but it seems to be too much for a single cluster
Offtopic: if you don't have to run on AWS, I'd also always consider running on GCE for production workloads instead, as it is a well-managed platform rather than something you have to build yourself.