Deployment using Kubernetes - feasibility of Master And Nodes in same machine - amazon-web-services

I am trying to deploy my microservices using Kubernetes. I have one Ubuntu 16.04 machine running as an AWS EC2 instance, and on that EC2 instance I need to use Kubernetes to deploy my microservices, which are developed with Spring Boot. I have already explored the architecture of Kubernetes, but I am now learning how to install Kubernetes on Ubuntu.
The guides show that you need at least two machines: one for the master and another for the nodes (worker machines). I am adding one or two of the links that I read about installing Kubernetes below:
https://medium.com/#Grigorkh/install-kubernetes-on-ubuntu-1ac2ef522a36
https://medium.com/#SystemMining/setup-kubenetes-cluster-on-ubuntu-16-04-with-kubeadm-336f4061d929
Here I need to clarify my confusions related to Kubernetes and its installation. I am adding my confusions in the section below:
Can I use one Ubuntu 16.04 machine for both master and worker for my microservice deployment?
Can I integrate Kubernetes with Jenkins on the same Ubuntu 16.04 machine, since I am planning to choose an EC2 Ubuntu 16.04 LTS instance for this?
If running the master and node on the same machine is possible (doubt 1), then how can I create a different number of nodes when I initialize my cluster using kubeadm init?
I am only a beginner with this.

Let's clarify one by one.
Can I use one Ubuntu 16.04 machine for both master and worker for my microservice deployment?
Yes, you can use one server for all components, but only if you run your master and node in separate VMs or containers. Theoretically, it is possible to create an all-in-one server without that, but it's tricky and I don't recommend it.
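For completeness, the all-in-one approach on a single kubeadm host usually amounts to removing the NoSchedule taint from the master so that ordinary workloads can be scheduled there. A rough sketch; the exact taint key depends on your Kubernetes version (newer releases use node-role.kubernetes.io/control-plane instead of node-role.kubernetes.io/master):

    # initialize the control plane on this machine (the pod CIDR here is just the one Flannel expects)
    sudo kubeadm init --pod-network-cidr=10.244.0.0/16
    # allow ordinary pods to be scheduled on the master node
    kubectl taint nodes --all node-role.kubernetes.io/master-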
Can I integrate Kubernetes with Jenkins on the same Ubuntu 16.04 machine, since I am planning to choose an EC2 Ubuntu 16.04 LTS instance for this?
You can, for example, install Jenkins inside Kubernetes, or install it somewhere else and integrate it. So yes, you can. Here is one of the articles about it.
If running the master and node on the same machine is possible (doubt 1), then how can I create a different number of nodes when I initialize my cluster using kubeadm init?
You cannot create multiple nodes on a single machine without a Docker-in-Docker solution or VMs.
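Normally, each additional node is a separate machine that joins the cluster using the token printed at the end of kubeadm init. A sketch with placeholder values, not literal commands to copy:

    # run on each additional, separate worker machine
    sudo kubeadm join <master-ip>:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>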
Actually, I highly recommend Minikube for single-node Kubernetes. It will automatically create a local cluster inside a VM for you in one command.
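A minimal sketch of that workflow, assuming a hypervisor such as VirtualBox is already installed:

    # create a single-node local cluster inside a VM
    minikube start
    # verify the node is up and Ready
    kubectl get nodes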

Related

How can you run a Proxmox server on an Ubuntu EC2 instance

I would like to run a Proxmox server on an Ubuntu EC2 instance.
I know this may sound crazy, but I do not have any spare hardware to run a Proxmox server on. Would it be possible to run this on an Ubuntu EC2 instance?
If I were to download Proxmox onto a flash drive, could I insert it into my computer and install it, overriding the Ubuntu instance and just using the hardware? Is this possible on AWS?
It is possible to run Proxmox on EC2, but if you want to host VM guests you need to run on an instance type that supports nested virtualisation, which is only the "metal" instances. These start at about $4/hour.
Running containers works fine on any standard x64 instance type, though.
I posted a guide to installing Proxmox on EC2 here:
https://github.com/thenickdude/proxmox-on-ec2
The tricky part that the guide fixes up automatically is harmonising the network configuration generated by Debian's cloud-init package with Proxmox's nonstandard ifupdown2 package.
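As a quick sanity check before installing, you can confirm that the instance actually exposes hardware virtualisation extensions to the OS (only the bare metal instance types will). A small sketch:

    # a non-zero count means VT-x/AMD-V is visible, so KVM guests can run
    egrep -c '(vmx|svm)' /proc/cpuinfo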

How to deploy a production-grade Jenkins server?

I want to deploy Jenkins on AWS but I don't know what is the best way to deploy it.
I watched videos and read articles and below are some possible solutions.
1- Jenkins as a docker container, bind the volume and expose it with ELB or Nginx reverse proxy.
2- Jenkins on EKS or unmanaged K8s cluster and expose it via ELB.
3- Install it as a regular application via apt-get on EC2 and expose it via an Nginx reverse proxy/ELB.
I don't know what is the best way to deploy in production.
P.S: next plan is to deploy our Nexus and Sonarqube servers as well.
Thanks in advance :)
Well, it completely depends on many factors, such as:
Cost
Number of users (High availability)
If cost isn't a big issue, you can use an Amazon EKS cluster to deploy a highly available Jenkins server.
Check out this guide on how to do it on EKS.
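If you go the EKS route, a common pattern is to install Jenkins from its Helm chart and expose it through a LoadBalancer service. A sketch, assuming kubectl is already pointed at your EKS cluster; the value names follow the current chart and may differ for older chart versions:

    # add the official Jenkins chart repository and install into its own namespace
    helm repo add jenkins https://charts.jenkins.io
    helm repo update
    helm install jenkins jenkins/jenkins --namespace jenkins --create-namespace \
        --set controller.serviceType=LoadBalancer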

AWS: web server: Is it better to use Docker or configure the server on EC2 instance

I am using Arch Linux for development. I am trying to use a free tier AMI for EC2 on AWS.
I found Amazon Linux 2 as one of the AMIs.
I didn't find an Arch Linux AMI in the free tier.
I know that by using Docker I can still use Arch Linux and keep the environment the same.
The reason I want to use Arch is that I am familiar with its package management, which is crucial for ease of use on any particular Linux distribution.
So, will using Docker affect AWS performance, and is Docker worth using at all?
Or should I get used to the Amazon Linux distribution?
If you like Arch Linux, use the Arch Linux Docker image.
The Docker overhead is very small.
Using Docker will also make it easy to port your setup to any location: another cloud, a desktop, another OS.
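A minimal sketch of that setup on an Amazon Linux 2 instance, assuming the official archlinux image on Docker Hub:

    # install and start Docker on Amazon Linux 2
    sudo amazon-linux-extras install docker
    sudo systemctl start docker
    # drop into an Arch Linux userland where pacman is available
    sudo docker run -it archlinux bash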
Docker is a perfectly good way to go. Further, consider that, in various regions, you can use AWS Fargate. It allows you to start Docker containers (scaling them up and down, etc.) without having to manage servers (EC2 instances).

When using Hyperledger with VirtualBox, can I have multiple instances on one network?

I am a beginner in blockchain, so I have many questions.
When configuring the Hyperledger network, I create multiple Ubuntu instances using VirtualBox on one PC. Can the peers within each instance be connected in a single blockchain network?
Thanks in advance for your availability.
You are correct in identifying that the Peers need to communicate with each other. In the default development Fabric, all the Docker containers are running on a single machine and the network addressing/routing is managed by Docker Compose. If you split your fabric across separate VirtualBox Ubuntu instances, you will have to understand and manage the network addressing/routing yourself. This is a Docker and networking issue, not really a Fabric or Composer issue. You may find that Kubernetes or Docker Swarm is the most helpful way forward for you.
For that, you have to create your Fabric network and then add everyone to the network. You need to configure each and every peer in the channel config using the configtxgen tool.
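As a rough illustration of that step (the profile and channel names here are placeholders taken from the standard Fabric samples, not from your network):

    # generate the genesis block for the ordering service
    configtxgen -profile TwoOrgsOrdererGenesis -channelID sys-channel -outputBlock ./channel-artifacts/genesis.block
    # generate the channel creation transaction that the peers will later join
    configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/channel.tx -channelID mychannel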

How can I run Docker in an AWS Windows Server environment?

Things I'd tried:
Docker Toolbox on Windows Server 2012 R2. Disabled Hyper-V to allow VirtualBox. I cannot enable virtualization, as that is controlled by the physical BIOS.
Installed Docker EE on a Windows Server 2016 w/Containers EC2 instance. It installed correctly and the daemon is running, BUT I can't pull a single image besides hello-world:nanoserver. So I hunted down the windowsservercore and nanoserver images; they still don't work because they are out of date. The repo from the frizzm person at Docker.com doesn't work when you try to pull it.
Started again with a fresh Windows Server 2016 instance. I disabled Hyper-V and installed Toolbox. It doesn't work.
How do I run Docker in a Windows Server environment in AWS?
All of the vids/tuts make it seem so simple, but I sure can't get it to work. I'm at a loss.
You don't actually need to install Docker for Windows (formerly known as the Docker Toolbox) in order to utilize Docker on Windows Server.
First, it's important to understand that there are two different types of containers on the Windows Server 2016 platform: Windows Containers and Hyper-V containers.
Windows Containers - runs on top of the Windows Server kernel, no virtual machines used here
Hyper-V Containers - virtual machine containers, each with their own kernel
There's also a third option that runs on top of Hyper-V called Linux Containers on Windows (LCOW), but we won't get into that, as it appears you're specifically asking about Windows containers.
Here are a couple options you can look at:
Bare Metal Instances on AWS
If you absolutely need to run Windows Hyper-V containers on AWS, or want to run Linux containers with Docker for Windows, you can provision the i3.metal EC2 instance type, which is a bare metal instance. You can deploy Windows Server 2016 onto the i3.metal instance type, install Hyper-V, and install Docker for Windows. This will give you the ability to run Linux containers (under a Hyper-V Linux guest), Hyper-V containers, and Windows containers.
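A sketch of launching such an instance with the AWS CLI; the AMI ID, key pair, and security group are placeholders you would substitute with your own:

    aws ec2 run-instances \
        --image-id <windows-server-2016-ami-id> \
        --instance-type i3.metal \
        --key-name <your-key-pair> \
        --security-group-ids <your-security-group-id> \
        --count 1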
ECS-Optimized AMI
Amazon provides an Amazon Machine Image (AMI) that you can deploy EC2 instances from, which contains optimizations for the Amazon Elastic Container Service (ECS). ECS is a cloud-based clustering service that enables you to deploy container-based applications across an array of worker nodes running in EC2.
Generally you'll use ECS and the ECS-optimized AMI together to build a production-scale cluster to deploy your applications onto.
Windows Server 2016 with Containers AMI
There's also a "Windows Server 2016 with Containers" AMI available, which isn't the same as the ECS-optimized AMI, but does include support for running Docker containers on Windows Server 2016. All you have to do is deploy a new EC2 instance, using this AMI, and you can log into it and start issuing Docker commands to launch Windows containers. This option is most likely the easiest option for you, if you're new to Windows containers.
EC2 instances do not allow for nested virtualization (EC2 instances are themselves virtual machines). Docker for Windows uses Hyper-V under the hood, and Docker Toolbox uses VirtualBox under the hood, so neither of those solutions is viable.
Even if you were able to run them on a Windows EC2 instance, the performance wouldn't be that great due to the fact that Docker for Windows mounts files into the Docker VM via Samba, which is not very fast.
If you want to run Linux containers, you should probably run them on Linux. It's very fast to get set up, and all of the Docker commands that you're used to with Docker for Windows should still work.
It is possible to run Docker on Windows. Run the following command to set it up.
docker-machine create --driver amazonec2 aws01
What this command does is create a new EC2 Linux instance and connect Docker up to that Linux instance. When Docker commands are run on your Windows instance, they are actually sent to the Linux instance, executed, and the results are returned to the Windows EC2 instance.
Here's Docker's documentation on it. I hope this helps.
https://docs.docker.com/machine/drivers/aws/#aws-credential-file
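Once the machine exists, you point your local Docker client at it with docker-machine env (a sketch; the command prints the exact shell configuration to apply for your environment):

    # print the environment variables that point the local docker CLI at the aws01 host,
    # then run the configuration command it prints for your shell
    docker-machine env aws01
    # once configured, ordinary docker commands execute on the remote EC2 Linux host
    docker run hello-world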
I know this contradicts your question a little, but you might also consider running it on one of the new EC2 macOS instances, which are bare metal. It worked for me.