I have been investigating a combination of Spinnaker, Spring Boot/Cloud, and Amazon Web Services for implementing continuous delivery of microservices for my new employer.
My biggest issue is separating the different environments in AWS and having the Spring Boot/Cloud microservice determine which environment it is in as it moves through the pipeline from code check-in to production.
What are the best practices for separating the different environments in AWS? I have seen separate sub-accounts and the use of VPCs to separate the environments.
The next step is the microservice determining its environment when it starts up. We are planning on using Spring Cloud Config Server to provide runtime configuration to the microservices, so the microservice needs to determine its environment in order to set the 'label' when asking the config server for configuration.
Assumptions I am making:
1. Spinnaker, or some other pipeline-capable tool, is being used to push the artifact into the various environments.
2. The original artifact is a self-contained jar that would ideally be baked into an AWS AMI and then pushed without modification into the subsequent deployment environments.
3. The AMI can be built with whatever scripting is necessary to determine the runtime environment and provide that information to the jar when it is started (see the sketch after this list).
4. The instance is started by an auto-scaler, which launches the AMI without providing any external information at startup.
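To make this concrete, here is the kind of wiring I have in mind (a sketch only; the ENVIRONMENT variable name, service name, and config server address are placeholders, not decisions):

    # bootstrap.yml packaged inside the jar -- sketch only.
    # ENVIRONMENT would be exported by the AMI's startup scripting
    # (e.g. derived from instance tags or the account) before the jar is launched.
    spring:
      application:
        name: my-microservice              # placeholder service name
      cloud:
        config:
          uri: http://config-server:8888   # placeholder config server address
          # Use the environment name as the config server label,
          # falling back to 'master' if the variable is not set.
          label: ${ENVIRONMENT:master}

The open question is how the AMI's scripting should discover which environment it is running in (instance tags, account, VPC) so it can export that variable before starting the jar.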
I think that is enough for now. This could probably be split into multiple questions, but I wanted to maintain a cohesive whole for the process.
Thanks for the assistance.
Related
I'm new to Kubernetes.
I have multiple independent servers, all based on Spring Boot (Java).
Each server has its own independent database, with the connection details written in application.yml.
I was wondering: if I deploy to Kubernetes,
should I have, let's say, 15 different Deployments, basically one for each application.yml?
Could you please suggest the general flow or overall picture?
Flexibility comes from having little or no dependency between services, so yes, each service should be deployed and managed with its own Deployment. A Deployment manages Pods, and the Pod is the smallest deployable unit of a Kubernetes application. For example, say we have two services, login and user: they use different container images, so we need two different Pods, which means two different Deployments.
This lets you scale, roll out, clean up, and update each service independently. In addition, if you add monitoring later, it helps you identify which Deployment an unhealthy object belongs to.
Tools like Argo CD, which follow a GitOps approach, sync applications from a Git repository, so separate Deployments also make it easier to sync each application independently.
In addition, it is better to use Helm, with each service represented by its own Helm chart.
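As a rough sketch of what one of those Deployments could look like (names and image are placeholders, assuming one container per service):

    # One Deployment per service -- a minimal sketch; names and image are placeholders.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: login-service
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: login-service
      template:
        metadata:
          labels:
            app: login-service
        spec:
          containers:
            - name: login-service
              image: registry.example.com/login-service:1.0.0   # placeholder image
              ports:
                - containerPort: 8080
              volumeMounts:
                - name: app-config
                  mountPath: /config                            # per-service application.yml
          volumes:
            - name: app-config
              configMap:
                name: login-service-config                      # placeholder ConfigMap holding application.yml

The user service would get a second, independent Deployment of the same shape, which is what lets you scale and roll them out separately.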
I have a Django web application that is not too large and uses the default database that comes with Django. It doesn't have a large volume of requests either, probably no more than 100 requests per second.
I want to figure out a method of continuous deployment on AWS from my source code residing in GitHub. I don't want to use the EB CLI to deploy to Elastic Beanstalk because it requires manual commands on the command line and is not automated deployment. I tried setting up workflows for my app in GitHub Actions and also set up a web server environment in EB, but it didn't seem to work. Also, I couldn't figure out the final URL to view my app from that EB environment. I am working on a Windows machine.
Please suggest the least expensive way of doing this, or share any videos/articles you may have that will get my app visible in the browser after deployment.
You can use AWS CodePipeline, a service that builds, tests, and deploys your code every time there is a code change, based on the release process model you define. Use CodePipeline to orchestrate each step in your release process. As part of your setup, you plug other AWS services into CodePipeline to complete your software delivery pipeline.
https://docs.aws.amazon.com/whitepapers/latest/cicd_for_5g_networks_on_aws/cicd-on-aws.html
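If you include a CodeBuild stage in that pipeline (optional, but handy for running your Django tests before the Elastic Beanstalk deploy stage), a minimal buildspec.yml could look like this; the Python version and commands are assumptions about your project:

    # buildspec.yml for an optional CodeBuild stage -- a sketch, not a complete setup.
    version: 0.2
    phases:
      install:
        runtime-versions:
          python: 3.11               # assumed Python version
        commands:
          - pip install -r requirements.txt
      build:
        commands:
          - python manage.py test    # run the Django test suite
    artifacts:
      files:
        - '**/*'                     # pass the whole source tree on to the deploy stage

With GitHub as the source stage and Elastic Beanstalk as the deploy stage, this keeps the whole flow automated with no EB CLI involved.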
I want to migrate Mule applications deployed on Mule standalone (on-premises) to Anypoint Runtime Fabric (RTF) self-managed Kubernetes on AWS, but I could not find any documentation on this.
If you have any ideas or know of any documentation on this, please share.
Thanks in advance
Mule applications run exactly the same on-prem, on CloudHub or in Anypoint Runtime Fabric. It is only if your applications make assumptions about their environment that you are going to need to make adjustments. For example any access to the filesystem (reading a file from some directory) or some network access that is not replicated to the Kubernetes cluster. A common mistake is when developers use Windows as the development environment and are not aware that the execution in a container environment will be different. You may not be aware of those assumptions. Just test the application and see if there are any issues. It is possible it will run fine.
The one exception is if the applications share configurations and/or libraries through domains. Since applications in Runtime Fabric are isolated from each other, domains are not supported. You need to include the configurations in each separate application. For example, you cannot have an HTTP Listener config where several applications share the same TCP port to listen for incoming requests. That should be replaced by using Runtime Fabric inbound configurations.
Regarding deployment: when you deploy to a new deployment model, it is considered a completely new application, with no relationship to the previous one. There is no "migration" of deployments. You can deploy using Runtime Manager or Maven; see the documentation. Note that the documentation states that to deploy with Maven you first must publish the application to Exchange.
Yes, you can.
In general, it is an easy exercise. However, things may get a little complicated when you have lots of dependencies on the persistent object store. It may require slight code refactoring in the worst-case scenario. If you are running on-prem in cluster mode, then you are using Hazelcast, which is also available in RTF.
Choosing self-managed Kubernetes on EKS comes with some extra responsibilities. If you and your team have good expertise in Kubernetes and AWS, then it is a great choice. Keep in mind that the Anypoint runtime console allows at most 8 replicas for each app. However, if you are using a CI/CD pipeline, you should be able to scale it further.
There is no straightforward documentation, as the majority of the work is related to setting up your EKS cluster and the associated network, ports, ingress, etc.
Can we run an application that is configured to run on a multi-node AWS EC2 Kubernetes cluster using kops (project link) on a local Kubernetes cluster (set up using kubeadm)?
My thinking is that if the application runs in a k8s cluster based on AWS EC2 instances, it should also run in a local k8s cluster. I am trying it locally for testing purposes.
Here's what I have tried so far, but it is not working:
First, I set up my local two-node cluster using kubeadm.
Then I modified the project's installation script (link given above) by removing all references to EC2 (as I am using local machines) and to kops state (particularly in their create_cluster.py script).
I modified their application YAML files (app requirements) to match my local two-node setup.
Unfortunately, although most of the application pods are created and in the Running state, some other pods fail to be created, and therefore I am not able to run the whole application on my local cluster.
I appreciate your help.
That is the beauty of Docker and Kubernetes: they help keep your development environment matching production. For simple applications, written without custom resources, you can deploy the same workload to any cluster running on any cloud provider.
However, the ability to deploy the same workload to different clusters depends on several factors, such as:
How do you manage authorization and authentication in your cluster? For example, IAM or IRSA.
Are you using any cloud-native custom resources? For example, AWS ALBs used as LoadBalancer Services.
Are you using any cloud-native storage? For example, do your pods rely on EFS/EBS volumes?
Is your application cloud agnostic, or does it use cloud-native technologies like Neptune?
Can you mock cloud technologies locally? For example, using LocalStack to mock Kinesis and DynamoDB (see the Compose sketch after this list).
How do you resolve DNS routes? For example, say you are using RDS on AWS; you access it through a Route 53 entry. Locally you might be running a MySQL instance, and you need a DNS mechanism to discover that instance.
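On the mocking point, one way to stand in for services like Kinesis and DynamoDB locally is a LocalStack container; a minimal Docker Compose sketch (the service list and port are just an example):

    # docker-compose.yml -- run LocalStack locally to mock a few AWS services.
    version: "3.8"
    services:
      localstack:
        image: localstack/localstack
        ports:
          - "4566:4566"                 # single edge port for all mocked services
        environment:
          - SERVICES=kinesis,dynamodb   # adjust to whatever your app actually uses

Your application then only needs its AWS endpoint made configurable so it can point at http://localhost:4566 locally.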
I did a Google search and looked at the kOps documentation. I could not find any information about deploying locally; it only supports public cloud providers.
IMO, you need to figure out a way to set up your local cluster to stand in for your EKS cluster, and if there is any usage of cloud-native technologies, you need to figure out an alternative way of doing the same locally.
The true answer, as Rajan Panneer Selvam said in his response, is that it depends, but I'd like to expand somewhat on his answer by saying that your application should run on any K8S cluster given that it provides the services that the application consumes. What you're doing is considered good practice to ensure that your application is portable, which is always a factor in non-trivial applications where simply upgrading a downstream service could be considered a change of environment/platform requiring portability (platform-independence).
To help you achieve this, you should be developing a 12-Factor Application (12-FA) or one of its more up-to-date derivatives (12-FA is getting a little dated now and many variations have been suggested, but mostly they're all good).
For example, if your application uses a database then it should use DB-independent SQL or NoSQL so that you can switch it out. In production, you may run on Oracle, but in your local environment you may use MySQL: your application should not care. The credentials and connection string should be passed to the application via the usual K8S techniques of Secrets and ConfigMaps to help you achieve this. And all logging should be sent to stdout (and stderr) so that you can use a log-shipping agent to send the logs somewhere more useful than a local filesystem.
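As a sketch of that (all names here are made up), the credentials could come from a Secret and the connection string from a ConfigMap, both injected as environment variables:

    # Sketch: DB connection details supplied by the platform, not baked into the app.
    apiVersion: v1
    kind: Secret
    metadata:
      name: app-db-credentials          # hypothetical name
    type: Opaque
    stringData:
      DB_USER: app
      DB_PASSWORD: changeme             # in a real cluster this comes from a proper secret source
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: app-db-config               # hypothetical name
    data:
      DB_URL: jdbc:mysql://mysql.default.svc.cluster.local:3306/app

The Deployment injects both with envFrom, and the same manifests, with different values per cluster, serve local and production alike.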
If you run your app locally then you have to provide a surrogate for every 'platform' service that is provided in production, and this may mean switching out major components of what you consider to be your application. But this is OK; it is meant to happen. You provide a platform that provides services to your application layer. Switching from EC2 to local may mean reconfiguring the ingress controller to work without the ELB, or it may mean configuring Kubernetes Secrets to use local storage for dev creds rather than AWS KMS. It may mean reconfiguring your persistent volume classes to use local storage rather than EBS. All of this is expected and right.
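The persistent volume case, for instance, can be as small as swapping the storage class; a sketch with placeholder path and node name:

    # Sketch: a statically provisioned local volume standing in for an EBS-backed one.
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: local-storage
    provisioner: kubernetes.io/no-provisioner
    volumeBindingMode: WaitForFirstConsumer
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: app-data-local
    spec:
      capacity:
        storage: 5Gi
      accessModes:
        - ReadWriteOnce
      storageClassName: local-storage
      local:
        path: /mnt/app-data             # placeholder path on the node
      nodeAffinity:
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - worker-node-1     # placeholder node name

The application's PersistentVolumeClaims just request the local-storage class instead of the EBS-backed one; the workloads themselves stay untouched.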
What you should not have to do is start editing microservices to work in the new environment. If you find yourself doing that then the application has made a factoring and layering error. Platform services should be provided to the set of microservices that use them; the microservices should not be aware of the implementation details of these services.
Of course, it is possible that you have some non-portable code in your system; for example, you may be using some Oracle-specific PL/SQL that can't be run elsewhere. This code should be extracted to config files and equivalents provided for each database you wish to run on. This isn't always possible, in which case you should abstract as much as possible into isolated services, and you'll have to reimplement only those services on each new platform, which could still be time-consuming, but ultimately worth the effort for most non-trivial systems.
I have created a Spring Cloud microservices-based application with Netflix components (Eureka, Config, Zuul, etc.). Can someone explain to me how to deploy it on AWS? I am very new to AWS. I have to deploy a development instance of my application.
Do I need to integrate Docker before that, or can I go ahead without Docker as well?
As long as your application is self-contained and you have externalised your configuration, you should not have any issues.
Go through this link, which discusses what it takes to deploy an app to the cloud: Beyond 15 factor.
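As an illustration of what externalised configuration looks like in practice (property names and defaults here are only an example), the application.yml holds placeholders that the environment fills in:

    # application.yml -- a sketch of externalised configuration.
    # Real values come from environment variables set by the platform
    # (Beanstalk environment properties, ECS task definition, user data, etc.).
    spring:
      datasource:
        url: ${DB_URL:jdbc:mysql://localhost:3306/app}
        username: ${DB_USER:app}
        password: ${DB_PASSWORD:changeme}
    eureka:
      client:
        serviceUrl:
          defaultZone: ${EUREKA_URL:http://localhost:8761/eureka/}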
Use AWS Elastic Beanstalk to deploy and manage your application. Dockerizing your app is not a prerequisite for deploying it to AWS.
If you use an EC2 instance, then its configuration is no different from what you do on your local machine/server. It's just a virtual machine; no need to dockerize or anything like that. And if you're new to AWS, I'd suggest doing just that. Once you get your head around it, you can explore other options.
For example, AWS Beanstalk seems like a popular option. It provides a very secure and reliable configuration out of the box with no effort on your part. And yes, it does use Docker under the hood, but you won't need to deal with it directly unless you choose to, at least in most common cases. It supports a few different ways of deployment, which Amazon calls "Application Environments". See here for details. Just choose the one you like and follow the instructions. I'd like to warn you, though, that whilst Beanstalk is usually easier than EC2 to set up and use when dealing with a typical web application, your mileage might vary depending on your application's actual needs.
Amazon Elastic Container Service (ECS) / Elastic Kubernetes Service (EKS) are also good options to look into.
These services depend on Docker images of your application. Auto scaling, availability, and cross-region replication are taken care of by the cloud provider.
Hope this helps.