I'm new to Kubernetes.
I have multiple independent servers, all based on Spring Boot (Java).
Each server has its own separate database, and the database connection details are written in its application.yml.
I was wondering: if I deploy to Kubernetes,
should I have, let's say, 15 different Deployments, basically one per application.yml?
Could you please suggest the general flow or picture?
Flexibility comes when there is little or no dependency between services, so yes, each service should be deployed and managed with its own Deployment. A Deployment just manages Pods, and the Pod is the smallest unit of a Kubernetes application. For example, say we have two services, login and user: each has a different container image, so we need two different Pods, which means two different Deployments.
This lets you scale, roll out, clean up and update each service independently. And later on, if you add monitoring, it helps you trace an issue back to the specific Deployment that owns the failing object.
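To make that concrete, here is a minimal sketch of one such Deployment, assuming a hypothetical user-service image and a ConfigMap that carries that service's application.yml (all names, the image and the datasource URL are made up):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-service-config
    data:
      application.yml: |
        spring:
          datasource:
            url: jdbc:postgresql://user-db:5432/users   # hypothetical per-service DB
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: user-service
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: user-service
      template:
        metadata:
          labels:
            app: user-service
        spec:
          containers:
          - name: user-service
            image: registry.example.com/user-service:1.0.0   # placeholder image
            ports:
            - containerPort: 8080
            env:
            # assumes the standard Spring Boot env override for extra config locations
            - name: SPRING_CONFIG_ADDITIONAL_LOCATION
              value: /config/application.yml
            volumeMounts:
            - name: config
              mountPath: /config
          volumes:
          - name: config
            configMap:
              name: user-service-config

Each of the 15 services would get its own pair like this, so configuration, scaling and rollouts stay completely independent.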
Tools like Argo CD, which follow the GitOps approach, sync applications from a Git repository; with one Deployment per service it is also easier to sync each application independently.
In addition to that, it is better to use Helm: each service gets its own Helm chart.
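If you go with Helm, each service's chart would typically follow the standard layout, roughly like this (chart name is just an example):

    user-service-chart/
      Chart.yaml          # chart name and version
      values.yaml         # per-environment values: image tag, replicas, DB connection details
      templates/
        configmap.yaml    # renders the service's application.yml from values
        deployment.yaml
        service.yaml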
I have been working on a web app for a few months and now it's ready for deployment. My frontend and backend are in different docker containers (and different repos as well). I use docker-compose to communicate between the two containers and for nginx. Now, I want to deploy my app to AWS and I'm thinking of 2 approaches, but I don't know which one is better:
Deploy the 2 containers separately (as 2 different apps) so that it's easier for me to make changes to and maintain each of them; I also read somewhere that this approach is more secure.
Deploy them as a single app for a simpler deployment process, but other than that, I can't really think of anything good about this approach.
I'm obviously leaning more toward the first approach, but if anyone could give me more insights on the pros and cons of both approaches, I would highly appreciate it! I am trying to make this process as professional as possible so I can learn more about DevOps.
So what docker-compose does under the hood:
Creates a Docker network
Puts all containers in this network
Sets up DNS names, so containers can find each other using their names
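As a minimal sketch (service names are made up), a compose file like the one below lets the frontend container reach the backend simply as backend on the shared network:

    version: "3.8"
    services:
      backend:
        build: ./backend          # backend Dockerfile lives here
        ports:
          - "8080:8080"
      frontend:
        build: ./frontend         # frontend Dockerfile lives here
        ports:
          - "80:80"
        depends_on:
          - backend               # frontend can call http://backend:8080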
This can also be achieved with ECS (which seems suitable for your use case).
So create an ECS cluster with Fargate as the capacity provider (allowing you to work serverless, so you don't have to care about EC2 instances).
ECS works with task definitions, so you can create a task definition containing your backend and frontend and create a service based on the definition.
All containers defined in one task work much like with docker-compose: ECS creates a network for them, so they are basically on the same network.
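As a rough sketch, such a task definition could look like this in CloudFormation (names, images and sizes are placeholders; in practice you would also attach an execution role for pulling images and writing logs):

    MyAppTaskDefinition:
      Type: AWS::ECS::TaskDefinition
      Properties:
        Family: my-web-app
        RequiresCompatibilities: [FARGATE]
        NetworkMode: awsvpc            # required for Fargate
        Cpu: "512"
        Memory: "1024"
        ContainerDefinitions:
          - Name: backend
            Image: <your-backend-image>
            PortMappings:
              - ContainerPort: 8080
          - Name: frontend
            Image: <your-frontend-image>
            PortMappings:
              - ContainerPort: 80

Note that on Fargate (awsvpc network mode) the containers of one task share a network namespace, so they can also reach each other on localhost.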
Also see:
AWS Docs for ECS task definitions
AWS Docs for launch types
If you just want to use nginx in front of your service for load balancing, using an Application Load Balancer may be the better choice.
I want to migrate Mule applications deployed on Mule standalone (on-premises) to Anypoint Runtime Fabric (RTF) self-managed Kubernetes on AWS, but I could not find any documentation on this.
If you have any ideas or know of any documentation on this, please share.
Thanks in advance
Mule applications run exactly the same on-prem, on CloudHub or in Anypoint Runtime Fabric. It is only if your applications make assumptions about their environment that you are going to need to make adjustments: for example, any access to the filesystem (reading a file from some directory) or some network access that is not replicated in the Kubernetes cluster. A common mistake is when developers use Windows as the development environment and are not aware that execution in a container environment will be different. You may not be aware of those assumptions. Just test the application and see if there are any issues; it is possible it will run fine.
The one exception is if the applications share configurations and/or libraries through domains. Since applications in Runtime Fabric are isolated from each other, domains are not supported. You need to include the configurations in each separate application. For example, you cannot have an HTTP Listener config where several applications share the same TCP port to listen for incoming requests. That should be replaced by using Runtime Fabric inbound configurations.
About the deployment, when you deploy to a new deployment model, it is considered a completely new application, with no relationship to the previous one. There is no "migration" of deployments. You can deploy using Runtime Manager or Maven. See the documentation. Note that the documentation states that to deploy with Maven you first must publish the application to Exchange.
Yes, you can.
In general, it is an easy exercise. However, things may get a little complicated when you have lots of dependencies on the persistent object store. It may require slight code refactoring in the worst-case scenario. If you are running on-prem in cluster mode, then you are using Hazelcast, which is also available in RTF.
Choosing self-managed Kubernetes on EKS comes with some extra responsibilities. If you and your team have good expertise in Kubernetes and AWS, then it is a great choice. Keep in mind that the Anypoint runtime console allows at most 8 replicas for each app. However, if you are using a CI/CD pipeline, you should be able to scale it further.
There is no straightforward documentation, as the majority of the work is related to setting up your EKS cluster and the associated network, ports, ingress, etc.
There are several tutorials on how to deploy a containerized service to the cloud: AWS, Google Cloud Platform, Heroku, and many others all have nice tutorials on how to do this.
However, most real-world apps are made of two or more services (for example a database + a web server), rather than just one service.
Is it bad practice to deploy the various services of a multi-service app to different clusters (e.g. deploy the database to one GKE cluster and the web server to another GKE cluster)? I'm asking because I am finding it very difficult to deploy a simple web app to a single cluster, while I was expecting that once I set up my Dockerfiles and docker-compose.yml everything would work out of the box (as advertised by the documentation of Docker Compose and Kubernetes) and I would be able to have a small cluster with one container for my database and one container for my web server.
So my questions are:
Is it bad practice to deploy the various services of a multi-service app to different clusters?
What is, in general, the de-facto standard way to deploy a web app with a database and a web server to the cloud? What are the easiest tools to achieve this?
Practically, what is the simplest way I can deploy a React + Express + MongoDB app to any cloud provider with a free-tier account?
Deploying multiple services (a.k.a. applications) that share some logic between them on the same cluster/namespace is actually the best practice. I am not sure why you find it difficult, but you could take a container orchestration platform such as Kubernetes and deploy as many applications as you want in the same project on the same cluster.
I would recommend getting into a cloud platform that offers a container orchestrator, such as Google Container Engine on Google Cloud Platform (or any other cloud platform you want), and starting to explore around. You can also read about containers in general, or about Kubernetes.
So, practically speaking, I would probably create MongoDB and the Express app inside the same namespace (and run every other service or application related to the project in its own container within the same namespace).
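A rough sketch of that layout, with placeholder names and only the MongoDB part spelled out (no persistent volume shown, which a real setup would add):

    apiVersion: v1
    kind: Namespace
    metadata:
      name: my-project
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: mongodb
      namespace: my-project
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: mongodb
      template:
        metadata:
          labels:
            app: mongodb
        spec:
          containers:
          - name: mongodb
            image: mongo:6
            ports:
            - containerPort: 27017
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: mongodb
      namespace: my-project
    spec:
      selector:
        app: mongodb
      ports:
      - port: 27017

The Express Deployment would sit in the same namespace and connect to mongodb://mongodb:27017, i.e. using the Service name as the hostname.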
I was looking at AWS CodeDeploy to perform deployments. My application will have multiple services like Apache, Tomcat, a database, Cassandra, Kafka, etc.
Each service will run on different machines. As far as I know, it looks like I need to create a different deployment group for each service (because each service runs on different instances) and a different deployment for each service.
So, for example, if I have around 5 different services in my application and each runs on a different instance, do I need to create 5 deployment groups and 5 different deployments in AWS CodeDeploy? Is there any option to perform deployment for all the services using a single deployment/appspec file? I would like to get some ideas from experts on how we can accomplish this effectively.
If I understand your use case correctly, you have an application that is made up of 5 different services. Every service runs on a different set of instances. You want to know if you can deploy all 5 services to a different set of instances with one deployment? That's not possible without doing some really hacky things in your appspec.yml, which would not be recommended. A deployment is tied to one deployment group and one revision.
The Recommended Way
If you want to do this with CodeDeploy, you can break it up with applications or deployment groups. There is no way to have a deployment group with a mix of different services for different instances.
Ideally, each service in your application has its own CodeDeploy application, and each application would have one deployment group. You could use a deployment group for that as well within the same application, but conceptually a deployment group is intended to break up a fleet running the same service, though it doesn't really matter.
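Each of those per-service CodeDeploy applications then ships its own revision with its own appspec.yml, roughly along these lines (paths and script names are placeholders):

    version: 0.0
    os: linux
    files:
      - source: /build/my-service
        destination: /opt/my-service
    hooks:
      ApplicationStop:
        - location: scripts/stop_service.sh
          timeout: 60
      AfterInstall:
        - location: scripts/configure_service.sh
          timeout: 120
      ApplicationStart:
        - location: scripts/start_service.sh
          timeout: 60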
Installing applications you don't code
If you're talking about installing applications you don't own, like a DB or Apache, EC2 user data is probably the place you want to install those from.
I am very new to Kubernetes so apologies for gaps in my understanding and possibly incorrect wording.
I am developing on my local MacBook Pro, which is somewhat resource constrained. My actual payload is a database, which is already running in a Docker container, but obviously needs some sort of persistent storage.
The individual containers also need to talk to each other over the network, and some of them need a channel (an open port) to the outside world.
I would like to set up a single Kubernetes cluster for dev and testing purposes that I can later easily deploy to bare-metal servers or a cloud vendor, such as Google or AWS.
From reading so far it looks like I can, for example, use minikube and run that cluster on top of the VirtualBox installation I am already running.
How would that then map to an actual deployment in the cloud?
What additional tools do I need to get it all running, especially with regards to persistent storage and network?
Will it map easily to the cloud?
What configuration management software would you recommend to maintain all that configuration?
A very short answer is that it's hard to do this properly.
One of the best options I know of is LinuxKit: it allows you to build identical images that you can run on any of the popular cloud providers, in a data centre of your own, or on a desktop hypervisor. In fact, this is what Docker for Mac is based on.
Disclaimer: I am one of the LinuxKit contributors.
Generally you get more or less the same Kubernetes regardless of how you spin up the cluster. However, compared to the cloud, other deployments will usually lack what the cloud provides by default through Kubernetes' built-in cloud providers. Some very important features this relates to are things like out-of-the-box support for LoadBalancer-type Services or automatic PersistentVolume provisioning.
If you're OK with not having them, or with configuring them separately for your dev/test environment, then you should be quite fine.
Regarding PVCs/PVs, the lack of an automatic PV provisioner (unless you set up something like GlusterFS with Heketi to support this) means that you will have to provision every PV manually on the dev/test cluster, as opposed to this happening automatically in the cloud.
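Provisioning a PV manually just means creating the PersistentVolume object yourself before the claim can bind; a hostPath-based sketch for a dev cluster might look like this (size, path and names are only examples, and hostPath is only sensible on a single-node setup):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: db-data-pv
    spec:
      capacity:
        storage: 5Gi
      accessModes:
        - ReadWriteOnce
      hostPath:
        path: /data/db
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: db-data
    spec:
      storageClassName: ""      # empty class keeps any dynamic provisioner out of the way
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi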
Also, as you begin, there are bound to be some minor differences between your dev/test setup and prod, so you might really want to investigate manifest templating and management solutions like Helm from day one of your work with deployments to Kubernetes. It would have saved me a lot of headache if I had done that myself when I started with Kubernetes.
Focusing a bit on your inquiry on the database, I think you have two options (assuming cloud is still an option for you):
use a docker database image and mount volumes
use an RDS instance, in the case of AWS
I believe that in the case of databases, using volumes is generally not recommended.
What I would suggest you do (once you grasp the basic concepts a bit, mainly Services) is:
create an RDS instance and your needed databases therein
expose this RDS instance as a Service of type ExternalName
I have been doing the following and so far it is working:
    apiVersion: v1
    kind: Service
    metadata:
      name: my-database-service
      namespace: some-namespace
    spec:
      type: ExternalName
      externalName: <my-rds-endpoint>
After that, the rest of your k8s services can reach this service via my-database-service.
I think this approach is more consistent database-wise and saves the hassle of volumes.
That being said, I acknowledge that the guidelines in terms of "select-this-if-you-go-for-cloud" or "that-if-you-go-on-prem" are not quite clear yet.
My experience so far indicates that:
most likely for on-prem (not just your localhost) the way to go is kubeadm
for AWS I have been having a pleasant experience with kops so far
there is also the Canonical solution, which seems to use a stack (conjure-up/juju) to help deploy their own slightly modified version of Kubernetes that they claim suits both cloud and on-prem (I haven't tried it at all)