It has been a little unclear to me what the requirements are for running istio-pilot with the Consul adapter. I am trying to set up istio-pilot discovery to act as a pure Envoy xDS server. However, in one of the examples where Consul is used (from the Istio source), a kube-apiserver (and etcd, for that matter) is also installed. I would like to use Envoy (or the istio-pilot agent, for that matter) as the data plane, but leverage Consul for service discovery and not integrate with Kubernetes. Does istio-pilot still require Kubernetes for that use case?
Istio supports several different so-called ServiceDiscovery implementations.
Kubernetes is one of them; it discovers Services from Kubernetes Services.
But this is really just one of the possible ways to run Istio Pilot, and you can use other ServiceDiscovery mechanisms like Consul via the command line argument --registries Consul.
See https://archive.istio.io/v1.4/docs/reference/commands/pilot-discovery/ for a detailed description of the command line arguments.
Once you run Pilot with that configuration it should load Services exclusively from Consul. These should be pushed to the data plane under the usual name <service name>.service.consul.
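For example, a minimal invocation could look like this (the Consul address is a placeholder; it assumes Consul's HTTP API is reachable from Pilot):
pilot-discovery discovery \
  --registries Consul \
  --consulserverURL http://consul-server:8500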
UPDATE:
From your comment below it seems that you not only want to avoid loading Services from Kubernetes, but to run without Kubernetes entirely.
While this indeed doesn't seem to be possible with 1.4 (watching Istio resources is always started there), it seems to work with 1.5.
To achieve that you need to start Pilot with --disable-install-crds and --configDir <config path>, where <config path> points to a directory containing the YAMLs for the Istio-specific resources that you might still need, like Sidecars, MeshPolicy, EnvoyFilter etc.
If --configDir is not defined Pilot will still try to get these resources from Kubernetes, so it is essential to add this argument even if the directory is empty.
Finally you should make sure that the MeshConfig you pass to Pilot via --meshConfig meshconfig.yaml does not point to the URL of Galley. If you copied an existing /etc/istio/config/mesh file from a running instance of Pilot, comment that section out:
configSources:
#- address: istio-galley.istio-system.svc:9901
#  tlsSettings:
#    mode: ISTIO_MUTUAL
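Putting the flags above together, the Pilot start-up could look roughly like this (the Consul URL and directory paths are placeholders, not a verified deployment):
pilot-discovery discovery \
  --registries Consul \
  --consulserverURL http://consul-server:8500 \
  --disable-install-crds \
  --configDir /etc/istio/config-yamls \
  --meshConfig /etc/istio/config/mesh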
Related
Can we run an application that is configured for a multi-node AWS EC2 K8s cluster built with kops (project link) on a local Kubernetes cluster (set up using kubeadm)?
My thinking is that if the application runs in a k8s cluster based on AWS EC2 instances, it should also run in a local k8s cluster. I am trying it locally for testing purposes.
Here's what I have tried so far, but it is not working.
First I set up my local 2-node cluster using kubeadm.
Then I modified the installation script of the project (link given above) by removing all the references to EC2 (as I am using local machines) and to the kops state (particularly in their create_cluster.py script).
I have modified their application YAML files (app requirements) to match my local 2-node setup.
Unfortunately, although most of the application pods are created and in a running state, some other pods fail to be created, and therefore I am not able to run the whole application on my local cluster.
I appreciate your help.
That is the beauty of Docker and Kubernetes: they help keep your development environment matching production. For simple applications, written without custom resources, you can deploy the same workload to any cluster running on any cloud provider.
However, the ability to deploy the same workload to different clusters depends on a few factors, such as:
How do you manage authentication and authorization in your cluster? For example, IAM, IRSA.
Are you using any cloud-native custom resources - e.g., AWS ALBs used as LoadBalancer Services?
Are you using any cloud-native storage - e.g., do your pods rely on EFS/EBS volumes?
Is your application cloud agnostic - or does it use provider-native technologies like Neptune?
Can you mock cloud technologies locally - e.g., using LocalStack to mock Kinesis and DynamoDB?
How do you resolve DNS routes - e.g., say you are using RDS on AWS; you can access it via a Route 53 entry. Locally you might be running a MySQL instance and you need a DNS mechanism to discover that instance.
I did a Google search and looked at the kOps documentation. I could not find any info about deploying to a local environment; it only supports public cloud providers.
IMO, you need to figure out a way to set up your cluster locally to mirror the EKS cluster, and wherever cloud-native technologies are used, you need to figure out a local alternative for each of them.
The true answer, as Rajan Panneer Selvam said in his response, is that it depends, but I'd like to expand somewhat on his answer by saying that your application should run on any K8S cluster given that it provides the services that the application consumes. What you're doing is considered good practice to ensure that your application is portable, which is always a factor in non-trivial applications where simply upgrading a downstream service could be considered a change of environment/platform requiring portability (platform-independence).
To help you achieve this, you should be developing a 12-Factor Application (12-FA) or one of its more up-to-date derivatives (12-FA is getting a little dated now and many variations have been suggested, but mostly they're all good).
For example, if your application uses a database then it should use DB independent SQL or no-sql so that you can switch it out. In production, you may run on Oracle, but in your local environment you may use MySQL: your application should not care. The credentials and connection string should be passed to the application via the usual K8S techniques of secrets and config-maps to help you achieve this. And all logging should be sent to stdout (and stderr) so that you can use a log-shipping agent to send the logs somewhere more useful than a local filesystem.
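As a rough illustration of that last point, the credentials could be injected along these lines (the names are made up for the example, not taken from your project):
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  DB_USER: myapp
  DB_PASSWORD: change-me
---
# fragment of the Deployment's container spec: the app only sees env vars,
# so swapping Oracle for MySQL is a configuration change, not a code change
containers:
  - name: myapp
    image: myapp:latest
    envFrom:
      - secretRef:
          name: db-credentials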
If you run your app locally then you have to provide a surrogate for every 'platform' service that is provided in production, and this may mean switching out major components of what you consider to be your application but this is ok, it is meant to happen. You provide a platform that provides services to your application-layer. Switching from EC2 to local may mean reconfiguring the ingress controller to work without the ELB, or it may mean configuring kubernetes secrets to use local-storage for dev creds rather than AWS KMS. It may mean reconfiguring your persistent volume classes to use local storage rather than EBS. All of this is expected and right.
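For instance, the storage switch can be as small as providing a different StorageClass under the same name on each platform. A sketch, assuming EBS in production and statically provisioned volumes locally (the class name is illustrative):
# on AWS
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
---
# locally: no dynamic provisioner, PVs are created by hand
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer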
What you should not have to do is start editing microservices to work in the new environment. If you find yourself doing that then the application has made a factoring and layering error. Platform services should be provided to a set of microservices that use them, the microservices should not be aware of the implementation details of these services.
Of course, it is possible that you have some non-portable code in your system; for example, you may be using some Oracle-specific PL/SQL that can't be run elsewhere. This code should be extracted to config files and equivalents provided for each database you wish to run on. This isn't always possible, in which case you should abstract as much as possible into isolated services and reimplement only those services on each new platform, which could still be time-consuming, but ultimately worth the effort for most non-trivial systems.
There is a project migrated from legacy to GCP.
On GCP everything runs on microservices.
There are maybe around 40-50 microservices.
I would like to automate testing of these microservices, but no endpoints are exposed in this project.
How could you automate a microservice when there are no endpoints?
What type of architecture could you use to test this?
Db: Firestore (nosql)
Thanks
M
In my view you can do it the following way:
Use a ClusterIP or NodePort Service to access those pods.
Spin up a new pod that will reach out to your target pods.
You can allow pod-to-pod communication based on labels by enabling a network policy.
You can use Calico as the network policy agent.
You can view the logs of your testing pod using kubectl logs [pod name], from the logging service of your cloud provider, or via a DaemonSet that you could install.
The testing pod can periodically send traffic: either use a thread that calls the target service and then sleeps for a while, or use a Kubernetes CronJob to call the target service; choose based on your use case (see the sketch below).
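For the CronJob option, a minimal sketch could look like this (the service name, schedule and image are assumptions for illustration; on newer clusters the apiVersion would be batch/v1):
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: target-service-probe
spec:
  schedule: "*/5 * * * *"          # every 5 minutes
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: probe
              image: curlimages/curl:latest
              # call the target pod through its ClusterIP service
              args: ["-sf", "http://target-service.default.svc.cluster.local:8080/health"]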
Let me know if this meets your requirements or if you need me to elaborate further.
In terms of finding out how to test your microservices on the Google Cloud Platform, I would suggest referencing our documentation on "Microservices Architecture on Google App Engine", as it will explain and guide you through how to implement your services on GCP. You may also look into this document as well, as it provides the best practices for designing APIs to communicate between microservices.
Additionally, user "ARINDAM BANERJEE" has a great example you can follow as well.
I'm writing a k8s operator. Knowing which cloud provider the cluster is currently running on, I can do some platform-specific tasks for users, such as preparing some default storage classes.
But how can an operator running in the k8s cluster know whether it is GCP or AWS?
After scanning through the APIs, the cloud provider leaves some clues here and there; for example, the GKE cluster I am running now has an API named /apis/nodemanagement.gke.io/v1alpha1,
but I think that's a little bit too hacky, and I wonder if there is a more formal way to get this info.
No, this is not exposed in a consistent way. You should have the user put it in their config file or similar.
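A minimal version of that "put it in their config" approach, with made-up names, could be a ConfigMap the operator reads at start-up:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-operator-config        # hypothetical name
  namespace: my-operator-system
data:
  cloudProvider: "gcp"            # the user sets this to gcp, aws, azure, on-prem, ...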
Indeed, it's not consistent. When the configuration is added by default to kubectl, you have these patterns:
> kubectl config current-context
# For GCP
> gke_gbl-imt-homerider-basguillaueb_europe-west1-b_my-first-cluster-1
# For AWS
> arn:aws:eks:eu-west-1:306974639454:cluster/demo-knative
You can also rename the config entry if you prefer your own pattern.
I am very new to Kubernetes so apologies for gaps in my understanding and possibly incorrect wording.
I am developing on my local MacBook Pro, which is somewhat resource constrained. My actual payload is a database, which is already running in a Docker container, but obviously needs some sort of persistent storage.
The individual containers also need to talk to each other over the network, and some of them need a channel (open port) to the outside world.
I would like to set up a single Kubernetes cluster for dev and testing purposes that I can later easily deploy to bare metal servers or a cloud vendor - Google or AWS.
From reading so far it looks like I can, for example, use minikube and orchestrate that cluster on top of the VirtualBox installation I am already running.
How would that then map to an actual deployment in the cloud?
What additional tools do I need to get it all running, especially with regards to persistent storage and network?
Will it map easily to the cloud?
What configuration management software would you recommend to maintain all that configuration?
A very short answer is that it's hard to do this properly.
One of the best options I know of is LinuxKit, it allows you to build identical images that you can run on any of the popular cloud providers or in a data centre of your own, or desktop hypervisor. In fact, this is what Docker for Mac is based on.
Disclaimer: I am one of the LinuxKit contributors.
Generally you get more or less the same Kubernetes regardless of how you spin up the cluster. However, compared to the cloud, other deployments will usually lack what the cloud provides by default through Kubernetes' built-in cloud providers. Some very important features this relates to are out-of-the-box support for LoadBalancer-type Services and automatic PersistentVolume provisioning.
If you're OK with not having them, or with configuring them additionally for your dev/test env, then you should be quite fine.
In the scope of PVC/PV, the lack of an automatic PV provisioner (unless you set up something like GlusterFS with Heketi to support this) means you will have to provision every PV manually on the dev/test cluster, as opposed to this happening automatically in the cloud.
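Manual provisioning on a dev/test cluster could look roughly like this (hostPath paths and sizes are illustrative only):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: dev-pv-001
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/dev-pv-001
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi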
Also, since there are bound to be some minor differences between your dev/test setup and prod, you might really want to investigate manifest templating and management solutions like Helm from day one of your work with deployments to Kubernetes. I know it would have saved me a lot of headaches if I had done that myself when I started with Kubernetes.
Focusing a bit on your inquiry on the database, I think you have two options (assuming cloud is still an option for you):
use a docker database image and mount volumes
use an RDS instance in case of aws
I believe that for databases, using volumes this way is generally not recommended.
What I would suggest you do (once you grasp the basic concepts a bit, mainly Services) is to:
create an RDS instance and the databases you need therein
expose this RDS instance as a Service of type ExternalName
I have been doing the following and so far it is working:
apiVersion: v1
kind: Service
metadata:
  name: my-database-service
  namespace: some-namespace
spec:
  type: ExternalName
  externalName: <my-rds-endpoint>
After that, the rest of your k8s services can reach this database via my-database-service.
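For example, an application container could then use a connection setting that never mentions RDS directly (a fragment of a container spec; the port is an assumption):
env:
  - name: DATABASE_HOST
    value: my-database-service   # or my-database-service.some-namespace from other namespaces
  - name: DATABASE_PORT
    value: "5432"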
I think this approach is more consistent db-wise and saves the volume hassle.
That being said, I acknowledge that the guidelines in terms of "select-this-if-you-go-for-cloud" or "that-if-you-go-on-prem" are not quite clear yet.
My experience so far indicates that:
most likely for on prem (not just your localhost) the way to go is kubeadm
for aws I have been having a pleasant experience with kops so far.
there is also the Canonical solution that seems to use a stack (conjure-up/juju) to help deploy their own slightly modified version of Kubernetes that they claim suits both cloud/on-prem (haven't tried it at all).
I have a basic django/postgres app running locally, based on the Docker Django docs. It uses docker compose to run the containers locally.
I'd like to run this app on Amazon Web Services (AWS), and to deploy it using the command line, not the AWS console.
My Attempt
When I tried this, I ended up with:
this yml config for ecs-cli
these notes on how I deployed from the command line.
Note: I was trying to fire up the Python dev server in a janky way, hoping that would work before I added nginx. The cluster (RDS+server) would come up, but then the instances would die right away.
Issues I Then Failed to Solve
I realized over the course of this:
the setup needs another container for a web server (nginx) to run on AWS (like this blog post, but the tutorial uses the AWS Console, which I wanted to avoid)
ecs-cli uses a different syntax for yml/json config than docker-compose, so you need to have some separate/similar code from your local docker.yml (and I'm not sure if my file above was correct)
Question
So, what ecs-cli commands and config do I use to deploy the app, or am I going about things all wrong?
Feel free to say I'm doing it all wrong. I could also use Elastic Beanstalk - the tutorials on this don't seem to use docker/docker-compose, but seem easier overall (at least well documented).
I'd like to understand why any given approach is a good way to do this.
One alternative you may wish to consider in lieu of ECS, if you just want to get it up in the Amazon cloud, is to make use of docker-machine with the amazonec2 driver.
When executing docker-compose, just ensure the remote Amazon host machine is ACTIVE, which can be checked with docker-machine ls.
One item you will still have to revisit in the Amazon Mgmt Console is opening the applicable ports, such as port 80 and any other ports exposed in the compose file. Once the security group is in place for the VPC, you should be able to simply refer to the VPC ID on subsequent executions, bypassing any need to use the Mgmt Console to add the ports. You may wish to bump up the instance size from the default t2.micro to match the t2.medium specified in your NOTES.
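The flow could look roughly like this (region, instance type and machine name are assumptions for illustration):
# create the remote Docker host on EC2
docker-machine create --driver amazonec2 \
  --amazonec2-region us-east-1 \
  --amazonec2-instance-type t2.medium \
  aws-sandbox
# point the local docker/docker-compose CLI at the remote host
eval $(docker-machine env aws-sandbox)
docker-machine ls        # the machine should show as ACTIVE (marked with *)
docker-compose up -d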
If ECS orchestration is needed, then a task definition will need to be created containing the container definitions you require, as defined in your docker-compose file. My recommendation would be to take advantage of the Mgmt Console to construct the definition, then grab the accompanying JSON definition, which is made available, and store it in your source code repository for future executions on the command line, where it can be referenced when registering task definitions and executing tasks and services within a given cluster.
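Once you have that JSON, the command-line side could be as small as the following (cluster, service and file names are placeholders):
aws ecs register-task-definition --cli-input-json file://django-app-taskdef.json
aws ecs create-service --cluster my-cluster \
  --service-name django-app \
  --task-definition django-app \
  --desired-count 1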