Installing Istio and apps in a single namespace? We just have the namespace and do not administer the cluster

Is it possible to install Istio and my apps in a single namespace? We do not own the cluster and have just been given a namespace. We also don't have the privileges to run operators etc.

Related

Can we run an application that is configured for a multi-node AWS EC2 K8s cluster using kops on a local Kubernetes cluster (using kubeadm)?

Can we run an application that is configured for a multi-node AWS EC2 K8s cluster using kops (project link) on a local Kubernetes cluster (set up using kubeadm)?
My thinking is that if the application runs in a k8s cluster based on AWS EC2 instances, it should also run in a local k8s cluster. I am trying it locally for testing purposes.
Here's what I have tried so far, but it is not working.
First I set up my local 2-node cluster using kubeadm
Then I modified the installation script of the project (link given above) by removing all references to EC2 (as I am using local machines) and to the kops state (particularly in their create_cluster.py script).
I have modified their application YAML files (app requirements) to match my local setup (2-node).
Unfortunately, although most of the application pods are created and in a running state, some other application pods fail to be created, and therefore I am not able to run the whole application on my local cluster.
I appreciate your help.
That is the beauty of Docker and Kubernetes: they help keep your development environment matching production. For simple applications, written without custom resources, you can deploy the same workload to any cluster running on any cloud provider.
However, the ability to deploy the same workload to different clusters depends on several factors, such as:
How do you manage authorization and authentication in your cluster? For example, IAM, IRSA.
Are you using any cloud-native custom resources? For example, AWS ALBs used as LoadBalancer Services.
Are you using any cloud-native storage? For example, do your pods rely on EFS/EBS volumes?
Is your application cloud agnostic? For example, is it using native technologies like Neptune?
Can you mock cloud technologies locally? For example, using LocalStack to mock Kinesis and DynamoDB.
How do you resolve DNS routes? For example, say you are using RDS on AWS: you can access it using a Route 53 entry. Locally you might be running a MySQL instance, and you need a DNS mechanism to discover that instance (see the sketch after this list).
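One common way to get that DNS indirection in Kubernetes is an ExternalName Service. Here is a minimal sketch, assuming a hypothetical service name orders-db that the application always resolves, and a local MySQL reachable as mysql.local; on AWS the externalName would instead be the Route 53/RDS endpoint:

# Hypothetical names; the app always connects to "orders-db",
# only this object changes per environment.
apiVersion: v1
kind: Service
metadata:
  name: orders-db
spec:
  type: ExternalName
  externalName: mysql.local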
I did a Google search and looked at the documentation of kOps. I could not find any info about how to deploy locally; it only supports public cloud providers.
IMO, you need to figure out a way to set up a local cluster that mirrors your EKS setup, and if there is any usage of cloud-native technologies, you need to figure out an alternative way of doing the same locally.
The true answer, as Rajan Panneer Selvam said in his response, is that it depends, but I'd like to expand somewhat on his answer by saying that your application should run on any K8S cluster given that it provides the services that the application consumes. What you're doing is considered good practice to ensure that your application is portable, which is always a factor in non-trivial applications where simply upgrading a downstream service could be considered a change of environment/platform requiring portability (platform-independence).
To help you achieve this, you should be developing a 12-Factor Application (12-FA) or one of its more up-to-date derivatives (12-FA is getting a little dated now and many variations have been suggested, but mostly they're all good).
For example, if your application uses a database then it should use DB-independent SQL or NoSQL so that you can switch it out. In production you may run on Oracle, but in your local environment you may use MySQL: your application should not care. The credentials and connection string should be passed to the application via the usual K8s techniques of Secrets and ConfigMaps to help you achieve this. And all logging should be sent to stdout (and stderr) so that you can use a log-shipping agent to send the logs somewhere more useful than a local filesystem.
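As a minimal sketch (all names here are illustrative, not taken from the question), the same Deployment can consume its database settings from a ConfigMap and a Secret, so switching from Oracle to MySQL only means changing those objects, not the application:

apiVersion: v1
kind: ConfigMap
metadata:
  name: orders-db-config
data:
  DB_HOST: mysql.default.svc
  DB_NAME: orders
---
apiVersion: v1
kind: Secret
metadata:
  name: orders-db-credentials
type: Opaque
stringData:
  DB_USER: orders_app
  DB_PASSWORD: change-me
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 1
  selector:
    matchLabels: { app: orders }
  template:
    metadata:
      labels: { app: orders }
    spec:
      containers:
        - name: orders
          image: example/orders:latest   # hypothetical image
          envFrom:
            - configMapRef: { name: orders-db-config }    # non-sensitive settings
            - secretRef: { name: orders-db-credentials }  # credentials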
If you run your app locally then you have to provide a surrogate for every 'platform' service that is provided in production, and this may mean switching out major components of what you consider to be your application, but this is OK; it is meant to happen. You provide a platform that provides services to your application layer. Switching from EC2 to local may mean reconfiguring the ingress controller to work without the ELB, or it may mean configuring Kubernetes Secrets to use local storage for dev creds rather than AWS KMS. It may mean reconfiguring your persistent volume classes to use local storage rather than EBS. All of this is expected and right.
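The persistent-volume point can look like the following sketch (class and claim names are illustrative): the workload always claims app-standard, and only the StorageClass definition differs between AWS (an EBS provisioner) and a local cluster (manually provisioned or local-path volumes):

# Local variant: manually provisioned PersistentVolumes, no dynamic provisioner.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: app-standard          # on AWS this same name could map to an EBS provisioner
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: app-standard
  resources:
    requests:
      storage: 5Gi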
What you should not have to do is start editing microservices to work in the new environment. If you find yourself doing that then the application has made a factoring and layering error. Platform services should be provided to a set of microservices that use them, the microservices should not be aware of the implementation details of these services.
Of course, it is possible that you have some non-portable code in your system; for example, you may be using some Oracle-specific PL/SQL that can't be run elsewhere. This code should be extracted to config files and equivalents provided for each database you wish to run on. This isn't always possible, in which case you should abstract as much as possible into isolated services and you'll have to reimplement only those services on each new platform, which could still be time-consuming, but ultimately worth the effort for most non-trivial systems.

Argo with multiple GCP projects

I've been looking into Argo as a GitOps-style CD system. It looks really neat. That said, I do not understand how to use Argo across multiple GCP projects. Specifically, the plan is to have environment-dependent projects (i.e. prod, stage, dev). It seems like Argo is not designed to orchestrate deployment across environment-dependent clusters, or is it?
Your question is mainly about security management. You have several possibilities and several points of view/levels of security.
1. Project segregation
The simplest and most secure way is to have Argo running in each project, with no relation/bridge between the environments. There is no security risk and no risk of deploying to the wrong project. Default project segregation (VPC and IAM roles) is sufficient.
But it implies deploying and maintaining the same app on several clusters, and paying for several clusters (dev, staging, and prod CD aren't used at the same frequency).
In terms of security, you can use the Compute Engine default service account for authorization, or you can rely on Workload Identity (the preferred way).
2. Namespace segregation
The other way is to have only one project with a cluster deployed in it and one Kubernetes namespace per delivery project. This way, you can reuse the same cluster for all the projects in your company.
You still have to update and maintain Argo in each namespace, but the cluster administration is easier because the nodes are the same.
In terms of security, you can use Workload Identity per namespace (and thus have one service account per namespace authorized in the delivery project) and keep the permissions segregated, as sketched below.
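A minimal sketch of that binding, assuming an illustrative CD project cd-project, a delivery project delivery-a, a namespace team-a, and a Kubernetes service account argo-deployer used by Argo in that namespace:

# Allow the KSA team-a/argo-deployer to act as a GCP service account that is
# only authorized in team-a's delivery project.
gcloud iam service-accounts add-iam-policy-binding \
  team-a-deployer@delivery-a.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:cd-project.svc.id.goog[team-a/argo-deployer]"

# Point the Kubernetes service account at the GCP service account.
kubectl annotate serviceaccount argo-deployer -n team-a \
  iam.gke.io/gcp-service-account=team-a-deployer@delivery-a.iam.gserviceaccount.com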
Here, the trade-off is private IP access. If your deployment needs to access private IPs inside the delivery project (for testing purposes, or to access a private K8s master), you have to set up VPC peering (and you are limited to 25 peerings per project) or set up a Shared VPC.
3. Service account segregation
The last solution isn't recommended, but it's the easiest to maintain. You have only one GKE cluster for all the environments, and only one namespace with Argo deployed in it. By configuration, you can tell Argo to use a specific service account to access the delivery project (with service account key files (not a recommended solution) stored in GKE Secrets or in Secret Manager, or (better) by using service account impersonation).
Here also, you have one service account authorized per delivery project. And the peering issue is the same if private IP access is required in the delivery project.
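A minimal sketch of the impersonation setup, with illustrative project and service account names: the service account Argo runs as only needs the token creator role on each delivery project's deployer account.

# Let argo-runner (in the CD project) mint tokens for the delivery project's deployer SA.
gcloud iam service-accounts add-iam-policy-binding \
  deployer@delivery-a.iam.gserviceaccount.com \
  --role roles/iam.serviceAccountTokenCreator \
  --member "serviceAccount:argo-runner@cd-project.iam.gserviceaccount.com"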

Any way to tell which cloud provider the current k8s cluster is running on?

I'm writing a k8s operator. With knowledge of the cloud provider the cluster is currently running on, I can do some platform-specific tasks for users, such as preparing some default storage classes.
But how can an operator running in the k8s cluster know whether it is on GCP or AWS?
After scanning through the APIs, I can see the cloud provider leaves some clues here and there; for example, the GKE cluster I am running now has an API group named /apis/nodemanagement.gke.io/v1alpha1.
But I think that's a little too hacky, and I wonder if there is a more formal way to get this info.
No, this is not exposed in a consistent way. You should have the user put it in their config file or similar.
Indeed, it's not consistent. When the provider's tooling adds the configuration to kubectl, you get these patterns:
> kubectl config current-context
# For GCP
gke_gbl-imt-homerider-basguillaueb_europe-west1-b_my-first-cluster-1
# For AWS
arn:aws:eks:eu-west-1:306974639454:cluster/demo-knative
You can also rename the context if you prefer your own pattern.
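For example (using the AWS context shown above; the new name is up to you):

kubectl config rename-context \
  arn:aws:eks:eu-west-1:306974639454:cluster/demo-knative demo-knative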

Istio-pilot Consul Support

It's been a little unclear to me what the requirements are for istio-pilot using the Consul adapter. I am trying to set up istio-pilot discovery to act as a pure Envoy xDS server. However, in one of the examples where Consul is used (from the Istio source), it does install a kube-apiserver (and etcd, for that matter). I would like to use Envoy as the data plane (or the istio-pilot agent, for that matter), but leverage Consul for service discovery, and not integrate with Kubernetes. Does istio-pilot require K8s anyway for that use case?
Istio supports several different so-called ServiceDiscovery implementations.
Kubernetes is one of them which discovers Services from Kubernetes Services.
But this is really just one of the possible ways to run Istio Pilot, and you can use other ServiceDiscovery mechanisms like Consul via the command-line argument --registries Consul.
See https://archive.istio.io/v1.4/docs/reference/commands/pilot-discovery/ for a detailed description of the command line arguments.
Once you run Pilot with that configuration it should load Services exclusively from Consul. These should be pushed to the data plane under the usual name <service name>.service.consul.
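A rough sketch of such an invocation; --registries is documented in the reference above, while the Consul server flag name and URL should be double-checked against your Pilot version:

pilot-discovery discovery \
  --registries Consul \
  --consulserverURL http://consul-server:8500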
UPDATE:
From your comment below, it seems that you not only want to avoid loading Services from Kubernetes, but want to run completely without it.
While this indeed doesn't seem to be possible with 1.4 – i.e. watching Istio resources is always started – it seems to work with 1.5.
To achieve that, you need to start Pilot with --disable-install-crds and --configDir <config path>, where <config path> points to a directory containing the YAMLs for the Istio-specific resources that you might still need, like Sidecars, MeshPolicy, EnvoyFilter etc.
If --configDir is not defined Pilot will still try to get these resources from Kubernetes, so it is essential to add this argument even if the directory is empty.
Finally, you should make sure that the MeshConfig you pass to Pilot via --meshConfig meshconfig.yaml does not point to a Galley URL, by commenting it out in case you copied an existing /etc/istio/config/mesh file from a running instance of Pilot:
configSources:
#- address: istio-galley.istio-system.svc:9901
#  tlsSettings:
#    mode: ISTIO_MUTUAL
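Putting the pieces together, a hedged sketch of a 1.5-era Pilot start command without Kubernetes might look like this (paths and the Consul URL are illustrative; verify the flag names against your version):

# --configDir may point to an empty directory, but it must be set, and
# meshconfig.yaml should have the Galley configSources commented out as above.
pilot-discovery discovery \
  --registries Consul \
  --consulserverURL http://consul-server:8500 \
  --disable-install-crds \
  --configDir ./istio-config \
  --meshConfig ./meshconfig.yaml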

Link gcloud with existing components

After installing gcloud, running gcloud components list lists all installed components.
Is there a way to associate kubectl (already installed on the system using the OS package manager) with this list?
gcloud handles 4 major versions of kubectl. This is convenient when you need to switch from one version to another (for testing purposes, as long as kubectl versions are backward compatible). You can see it as a kind of SDKMan or NVM.
My OS package manager installs kubectl even when I don't ask for it, as it's a dependency of kubeadm. So if I want to have kubeadm and have gcloud handle several versions of kubectl, I simply get a conflict (resolved by PATH precedence, so kubectl from ~/google-cloud-sdk/bin will never be used).
Cheers,
Olivier
It's likely easiest to uninstall kubectl from your OS and then run gcloud components install kubectl, but what benefit do you seek from the association?
Apart from having gcloud report kubectl in gcloud components list and update it with gcloud components update, the only linkage (to my knowledge) is after gcloud container clusters get-credentials ..., which depends on gcloud to support authentication against Kubernetes Engine clusters. But you can get this without using gcloud's bundled kubectl.
Otherwise, if your OS package manager is managing kubectl for you, I'd be inclined not to break it.
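For reference, this is roughly what the get-credentials flow looks like with an OS-packaged kubectl (cluster, zone, and project names here are illustrative):

gcloud container clusters get-credentials my-first-cluster-1 \
  --zone europe-west1-b --project my-project
which kubectl      # e.g. /usr/bin/kubectl rather than ~/google-cloud-sdk/bin/kubectl
kubectl get nodes  # authenticates via the kubeconfig entry gcloud just wrote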