Guarantee context for kubectl command - "kubectl get po use-context bla"?

I am writing a script which involves kubectl delete, so it is of course essential to run it against the correct context/cluster.
The problem is that from what I observe, if you open two terminals and do:
kubectl config use-context bla
in one window, then the other one switches as well. My concern is therefore that if something switches the context during script execution, my delete operation will start deleting resources in the wrong cluster.
I understand that I could use labels on my pods or different namespaces, but in my case namespaces are the same and there are no labels.
So is there a way to specify for each command individually which context it should execute against? Something like:
kubectl get po use-context bla

Use the --context flag:
kubectl get po --context bla
The help output of any kubectl command also mentions that you can run kubectl options to see the list of global options that can be applied to any command; --context is one such global option.
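For example, in a script you can pin every call to the intended context. A minimal sketch, assuming the context is named bla and my-pod is a placeholder pod name:
#!/usr/bin/env bash
set -euo pipefail
CTX="bla"   # placeholder context name
# Every call names the context explicitly, so a "use-context" run in
# another terminal cannot redirect the delete to a different cluster.
kubectl --context "$CTX" get po
kubectl --context "$CTX" delete pod my-pod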

Delete attempt of Kubernetes resource reports not found, even though it can be listed with "kubectl get"

I am running Kubeflow pipeline on a single node Rancher K3S cluster. Katib is deployed to create training jobs (Kind: TFJob) along with experiments (a CRD).
I can list the experiment resources with kubectl get experiments -n <namespace>. However, when trying to delete using kubectl delete experiment exp_name -n namespace the API server returns NotFound.
kubectl version is 1.22.12
kubeflow 1.6
How can a (or any) resource be deleted when it is listed by "kubectl get", but a direct kubectl delete says the resource cannot be found?
Hopefully there is a general answer applicable for any resource.
Example:
kc get experiments -n <namespace>
NAME        TYPE      STATUS   AGE
mnist-e2e   Running   True     21h
kc delete experiment mnist-e2e -n namespace
Error from server (NotFound): experiments.kubeflow.org "mnist-e2e" not found
I have tried these methods, but all involve the use of the resource name (mnist-e2e) and result in "NotFound".
I tried patching the manifest to empty the finalizers list:
kubectl patch experiment mnist-e2e \
-n namespace \
-p '{"metadata":{"finalizers":[]}}' \
--type=merge
I tried dumping a manifest of the "orphaned" resource and then deleting using that manifest:
kubectl get experiment mnist-e2e -n namespace -o yaml > exp.yaml
kubectl delete -f exp.yaml
Delete attempts from the Kubeflow UI Experiments (AutoML) page fail.
Thanks

Kubectl create job from cronjob and override args

kubectl allows you to create ad hoc jobs based on existing CronJobs.
This works great but in the documentation there is no specification for passing arguments upon creation of the job.
Example:
kubectl -n my-namespace create job --from=cronjob/myjob my-job-clone
Is there any way I can pass arguments to this job upon creation?
Although kubectl currently does not allow you to use the --from flag and specify a command in the same clause, you can work around this limitation by getting the yaml from a dry run and using yq to apply a patch to it.
For example:
# get the original yaml file
kubectl create job myjob --from cronjob/mycronjob --dry-run=client --output yaml > original.yaml
# generate a patch with your new arguments
yq new 'spec.template.spec.containers[0].args[+]' '{INSERT NEW ARGS HERE}' > patch.yaml
# apply the patch
yq merge --arrays update patch.yaml original.yaml > final.yaml
# create job from the final yaml
kubectl create -f final.yaml
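Note that yq new and yq merge are from the older yq v2/v3 syntax. If you are on yq v4, roughly the same patch can be applied in a single in-place step; a sketch with placeholder argument values:
# append new arguments to the first container of the dry-run manifest (yq v4)
yq eval -i '.spec.template.spec.containers[0].args += ["--my-flag", "my-value"]' original.yaml
kubectl create -f original.yaml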
OK, it turns out that kubectl does not allow you to use --from and specify a command in the same clause.
You will get the following error: cannot specify --from and command.
For example:
kubectl create job --from=cronjob/my-job.yaml my-job-test -- node run.js --date '2021-04-04'
error: cannot specify --from and command
So, in short, you cannot use your existing cron template and specify a command.
The closest thing you can get is to use the --image flag and manually pass in the image that your CronJob uses, then specify the command and args after it.
kubectl create job --image=<YOUR IMAGE NAME> my-job-test -- node run.js --date '2021-04-04'
job.batch/my-job-test created
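If you prefer not to hard-code the image, you could read it from the CronJob spec first. A sketch, assuming the CronJob is named myjob in my-namespace:
# look up the image used by the cronjob's first container
IMAGE=$(kubectl -n my-namespace get cronjob myjob \
  -o jsonpath='{.spec.jobTemplate.spec.template.spec.containers[0].image}')
kubectl -n my-namespace create job --image="$IMAGE" my-job-test -- node run.js --date '2021-04-04'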

kubectl how to rename a context

I have many contexts, one for staging, one for production, and many for dev clusters. Copying and pasting the default cluster names is tedious and hard, especially over time. How can I rename them to make context switching easier?
Renaming contexts is easy!
$ kubectl config rename-context old-name new-name
Confirm the change by
$ kubectl config get-contexts
If you are using kubectx try
kubectx new-context-name=old-context-name
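For example, with a hypothetical auto-generated GKE context name:
kubectl config rename-context gke_my-project_us-central1-a_dev-cluster dev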

Kubectl : No resource found

I’ve installed ICP4Data successfully. I am pretty green with respect to ICP4Data and Kubernetes. I’m trying to use the kubectl command to list the pods in ICP4D, but “kubectl get pods” returns “No resource found”. Am I missing something?
ICP4D uses the 'zen' namespace to logically separate its assets and resources from the core native ICP/Kube platform. In the default installation of ICP4D, there are no pods deployed in the 'default' namespace, hence you get "no resources found": if you don't provide a namespace when trying to get pods, kubectl assumes the default namespace.
To List the pods from zen namespace
kubectl get pods -n zen
To list all the namespaces available to you - try
kubectl get namespaces
To list pods from all the namespaces, you might want to append --all-namespaces
kubectl get pods --all-namespaces
This should list all the pods from zen, kube-system and possibly others.
Please try adding the namespace to the command as well. In the case of ICP4D, try kubectl get pods -n zen.
On the other hand, you could switch your namespace to zen at the beginning by
kubectl config set-context --current --namespace=zen
Then you will be able to see all the information by running without the -n argument
kubectl get pods
Check which namespace you are currently in. To find out which namespace your pod was created in, you can run this command:
kubectl get pods --all-namespaces
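To see which namespace your current context defaults to, one option is (a sketch; it prints nothing if no namespace is set on the context):
kubectl config view --minify --output 'jsonpath={..namespace}'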
Also, just to add: since I was in the default namespace and wanted to get the logs of a pod in another namespace, just doing
kubectl logs -f <pod_name>
was giving the output "Error from server (NotFound): pods "pod_name" not found".
So I specified the namespace as well.
kubectl logs -f <pod_name> -n namespace

Cannot assign instance name to concurrent workflow in Informatica

In Informatica, I can start a workflow but cannot get it to recognize my instance name in the session log and Workflow Monitor.
The workflow starts but in the session log it displays this:
Workflow wf_Temp started with run id [22350], run instance name [], run type [Concurrent Run with Unique Instance Name]
Instance name is blank.
My command is:
pmcmd startworkflow -sv <service> -d <domain> -u <user> -p <password> -f <folder> -rin INST1 -paramfile <full param file path name> wf_Temp
I have edited the workflow and selected the concurrent execution checkbox. Via the Configure Concurrent Execution button, I have created three instances: INST1, INST2, INST3, but without any associated parameter files. All parameter file fields are blank.
I understand, I think, that in order to start a workflow with PMCMD I must pass in one of the configured instance names (i.e. INST1, INST2, INST3, etc.)
If I execute the pmcmd command from PuTTY a second time, to see the second instance run, I receive a message that the workflow is still running and I have to wait. Why? I have checked the concurrent execution box in the workflow.
ERROR: Workflow [wf_Temp]: Could not start execution of this workflow because the current run on this Integration Service has not completed yet.
Disconnecting from Integration Service
So I think I'm close, but I am missing something. The workflow runs with the parameter file I pass via pmcmd, but the instance name seems to be ignored.
Further: do I have to pre-configure instance names in the Workflow Manager? Are the pmcmd instance-name and parameter-file arguments enough? It doesn't seem very dynamic if instances have to be pre-defined in the workflows.
Thanks.
@MacieJG
Here are the screenshots from PuTTY when I run the command. You can see the instance name DALLAS is being passed through pmcmd OK. No combination ever sets the instance name. I did not include the pics of your suggested Test 1, but the results were the same: still no instance.
Here's my complete test as requested in a comment above. I tried my best to put everything you may need here, but if I missed anything, just let me know. So here goes...
I've created a very simple workflow to run with an instance name. It uses a timer to wait and a command task to write the instance name to a file:
The concurrent execution has been set up in the simplest way:
Now, I've prepared the following batch to run the workflow (just user & password removed):
SET "PMCMD=C:\Informatica\9.5.1\clients\PowerCenterClient\CommandLineUtilities\PC\server\bin\pmcmd"
%PMCMD% startworkflow -sv Dev_IS -d Domain_vic-vpc -u ####### -p ####### -f Dev01 -rin GLASGOW wf_Instance_Test
%PMCMD% startworkflow -sv Dev_IS -d Domain_vic-vpc -u ####### -p ####### -f Dev01 -rin FRANKFURT wf_Instance_Test
%PMCMD% startworkflow -sv Dev_IS -d Domain_vic-vpc -u ####### -p ####### -f Dev01 -rin GLASGOW wf_Instance_Test
It runs three instances, two of them with the same name, just to test it. I run the batch the following way to capture the output:
pmStartTestWF.bat > c:\MG\pmStartTestWF.log
Once I execute it, here what I see in workflow monitor:
Just as expected, three instances executed and properly displayed. File output looks fine as well:
The output of pmcmd can be found here. Full definition of my test workflow is available here.
I really hope this will help you somehow. Feel free to let me know if you'd find anything missing here. Good luck!
You don't need to pre-configure instance names in workflow. Passing the instance name in pmcmd along with parameter filename is enough.
try this: pmcmd startworkflow -sv (service) -d (domain) -u (user) -p (password) -f (folder) -paramfile (full param file path name) -rin INST1 wf_Temp
To be precise: when you configure Concurrent Execution, you can specify if you:
allow concurrent run with same instance name
allow concurrent run only with unique instance name
In addition to that you may, but don't have to, indicate which instance should use which parameter file, so you won't need to mention it while executing. But that's a separate feature.
Now, if you've chosen the first one, you will be able to invoke the WF multiple times with the very same command. If you've chosen the second one and try this, you will get the 'WF is already running' error.
The trouble is that your example seems correct at first glance. As per the log message:
Workflow wf_Temp started with run id [22350], run instance name [], run type [Concurrent Run with Unique Instance Name]
So you're allowing unique instance names only. It seems that the instance name is not actually being used: the first execution does not set the instance name, so a similar second execution won't set it either and gets rejected, because it carries the same (i.e. empty) instance name.
You may try changing the setting to Allow concurrent run with same instance name; this should allow the second execution, but it does not solve the main issue: for some reason the instance name does not get passed.
Please verify your command against the docs referenced below. Try to match the argument order, perhaps. Please share some more info if it still fails.
Looking at the docs:
pmcmd StartWorkflow
<<-service|-sv> service [<-domain|-d> domain] [<-timeout|-t> timeout]>
<<-user|-u> username|<-uservar|-uv> userEnvVar>
<<-password|-p> password|<-passwordvar|-pv> passwordEnvVar>
[<<-usersecuritydomain|-usd> usersecuritydomain|<-usersecuritydomainvar|-usdv>
userSecuritydomainEnvVar>]
[<-folder|-f> folder]
[<-startfrom> taskInstancePath]
[<-recovery|-norecovery>]
[<-paramfile> paramfile]
[<-localparamfile|-lpf> localparamfile]
[<-osprofile|-o> OSUser]
[-wait|-nowait]
[<-runinsname|-rin> runInsName]
workflow