Running Windows containers in Kubernetes on AWS

I have an existing Kubernetes cluster (1.8) running in AWS that I installed using kops.
I would like to add Windows containers to the existing cluster, but I cannot find the right solution! :(
I thought of following the steps given in:
https://kubernetes.io/docs/getting-started-guides/windows/
I downloaded the node binaries (kubelet, kube-dns, kube-proxy, kubectl) and copied them to my Windows machine, but I got a little confused by the multiple networking options.
The guide also gives a kubeadm option for joining the node to my master, which makes no sense to me, since I used kops to create my cluster.
Can someone advise or help me on how I can get my Windows node added?

kops is really good if the default architecture satisfies your requirements; if you need to make changes, it will give you some trouble. For example, I needed to add a GPU node: I was able to add it, but I could not automate the process, because I was unable to create an auto-scaling group for it.
kops has a lot of pros, like creating the whole cluster in a transparent way.
Do you really need a Windows node?
If yes, try launching a cluster using kubeadm and then joining the Windows node to that cluster.
kops will take some time to add this Windows node feature.
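For reference, the kubeadm flow suggested here looks roughly like the sketch below; the master address, token and CA hash are placeholders, and the exact join procedure for a Windows node depends on the Kubernetes version and the networking option you pick.

    # On the Linux master (created with kubeadm), create a bootstrap token and
    # compute the CA certificate hash that the joining node needs:
    kubeadm token create
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
      openssl rsa -pubin -outform der 2>/dev/null | \
      openssl dgst -sha256 -hex | sed 's/^.* //'

    # On the Windows machine (with the node binaries from the guide in place),
    # join the cluster; <master-ip>, <token> and <hash> are placeholders:
    kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>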

Related

Planning node groups for GitLab with an EKS cluster

I am in the process of building an infrastructure for my GitLab instance using AWS EKS. I have already created an EKS cluster, added a managed node group and installed the gitlab-runner in the cluster. In this node group I can now run my pipelines as usual.
In my GitLab instance, I have several projects that each have an MR pipeline. In addition, I run another pipeline overnight in each project. These nightly pipelines sometimes require certain hardware resources such as an FPGA board or an SDR.
I want to clarify that I don't want to build and deploy apps in my cluster. The cluster should be used exclusively to run the pipelines.
Currently I am trying to create the right setup for the node groups and would like to draw on community experience in this regard.
What do I want to achieve?
I want to be able to choose the hardware for individual jobs, such as building the code. It should be possible to speed up the process with more nodes or a stronger instance type.
I also want to have a node group for external resources with special HW (FPGA boards, SDRs) to use in my tests.
Questions:
What node groups and settings are suitable in your experience?
How can I run jobs on specific node groups from GitLab? Is this possible with tags? How do I address the individual groups in GitLab? (See the sketch below.)
What is the best way to manage external HW resources, like the ones in my local lab?
I would be very happy if you shared your experiences with me! Any help is appreciated! Thanks a lot!
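To illustrate what I mean by tags, a rough sketch of what I imagine, assuming one gitlab-runner registration per node group; the URL, token and tag names are placeholders:

    # Runner for the general build node group:
    gitlab-runner register --non-interactive \
      --url https://gitlab.example.com/ \
      --registration-token <token> \
      --executor kubernetes \
      --tag-list "build"

    # Runner for the node group attached to the FPGA/SDR lab hardware:
    gitlab-runner register --non-interactive \
      --url https://gitlab.example.com/ \
      --registration-token <token> \
      --executor kubernetes \
      --tag-list "fpga-lab"

Each job in .gitlab-ci.yml would then pick its runner, and therefore its node group, through a matching tags: entry.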

GKE: Kubernetes nodes don't require the kubectl package

Do we really need kubectl to be part of Kubernetes nodes?
I don't know if there is a "correct" answer to this, but I feel it is needed, as any resource that is created should be manageable.
I understand your point that we all have gcloud and kubectl installed on our local machines, but this is not the case for everyone. There could be developers who only have SSH access to the nodes, without sufficient IAM roles or even gcloud installed on their local machines. So, if you can SSH into any of the nodes, you should be able to view (or add, delete and edit, as required) the resources on the cluster.
Personally, I never felt a need for this, as I have an Editor role in my project, but there could be situations or people who do not have gcloud or kubectl installed (or any such access), or who are using a VDI, or where, for security reasons (e.g. developers being allowed to take their laptops home), the organization does not allow these tools on local machines and enforces access through these nodes only.
So, in my opinion, this could be one of the use cases for which the creators decided to keep kubectl on every node.
One more possibility is compatibility issues. Imagine you upgrade kubectl on your local machine to a newer version that is incompatible with (or whose default behaviour differs from) one of your older Kubernetes clusters.
So, in a way, it ensures that there is always a compatible version of kubectl running on the nodes.
Note: in ideal situations, commands and APIs should be backward compatible.
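For example, the skew the compatibility argument refers to can be checked by comparing the client and server versions reported from your workstation and from a node:

    # Compare the kubectl client version with the cluster's API server version:
    kubectl version --short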

Do my docker-machine AWS EC2 instances show up in the Docker Cloud web API?

I am a little confused with all the different offerings by docker.
So far, I have been using Docker Cloud Web API (cloud.docker.com) to create node-clusters on EC2 instances by linking to my AWS account.
Recently, I wanted to set up a data container and mount it as a volume shared by other containers running on the same node. This requires the --volumes-from flag in Docker, which means I need to use docker-machine, connect to my AWS VM, and then launch my containers with this flag.
Do all of these containers show up on cloud.docker.com? Even the ones I launched from the terminal using docker-machine? Maybe I am confused here...
I found out that cloud.docker.com is still in beta and so doesn't offer a --volumes-from option. Also, these containers don't show up on cloud.docker.com yet. Maybe it will come in the future...
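For reference, the --volumes-from pattern described in the question looks roughly like this when run directly against the Docker Engine through docker-machine; the image and container names are just examples:

    # Create a data-only container that owns the /data volume:
    docker create -v /data --name datastore busybox

    # Start other containers on the same node that share that volume:
    docker run -d --volumes-from datastore --name app1 nginx
    docker run -d --volumes-from datastore --name app2 nginx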

Deploying multiple Deis clusters

I am looking to create a number of Deis clusters running in parallel on AWS and haven't been able to find any good documentation on how to do so. From what I understand I'd have to do the following:
When provisioning the cluster:
Create a new discovery URL
Give the stack a name other than the standard "deis" when using the ./provision-aws-cluster.sh script
Create different Deis profiles in $HOME/.deis/client.json that map to each cluster
And when utilizing the deisctl and deis command line interfaces, I need to specify the DEISCTL_TUNNEL and the DEIS_PROFILE each time, respectively.
Am I missing anything? Will this impact my current Deis cluster if I install using the changes listed above?
That is correct; I don't believe you are missing anything. You should save the cloud-config for each cluster (in contrib/coreos), which will have the discovery URL in it and possibly other customizations, depending on how your clusters will be configured. If the clusters are going to be different on the AWS side, make sure you save the cloudformation.json file for each as well.
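To make the per-cluster switching mentioned in the question concrete, a minimal sketch (the IP and profile name are placeholders):

    # Point deisctl at one specific cluster's host:
    DEISCTL_TUNNEL=<cluster-a-node-ip> deisctl list

    # Use the deis client with the profile defined for that cluster
    # in $HOME/.deis/client.json:
    DEIS_PROFILE=cluster-a deis apps:list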

Creating AWS RDS instance using Chef cookbook

Forgive me if my question is too vague. I am new to both AWS and Chef automation tool. I am trying to create an RDS instance on AWS using Chef automation. I want the details of the RDS instance to be in the cookbook and I do not want to go through the AWS console. I did some research and found a community cookbook that does this:
https://github.com/gosuri/aws-rds-cookbook/blob/master/README.md
In my experience with Chef, I always had a node that I could sudo on, and Chef made sure that the node was following the policies listed in the cookbook.
I am confused here because I do not even have a node in the first place; I am trying to create one using a cookbook. Is this possible? Can someone point me in the right direction?
Chef is an agent-based system, so you need a node of some kind. With tools like chef-provisioning (you would want to use the AWS driver in this case), you sometimes use your workstation and run chef-client from there, or you make a dedicated "provisioning node" which basically just sits there and does nothing but run provisioning recipes.
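As a minimal sketch of the "run chef-client from your workstation" approach (the recipe name my_rds is hypothetical; its contents would use the RDS resources from the community cookbook linked in the question):

    # Run Chef in local mode from the workstation, with the cookbook and its
    # dependencies available under ./cookbooks:
    chef-client --local-mode --override-runlist 'recipe[my_rds::default]'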