The request is invalid: patch: Invalid value:... cannot convert int64 to string and Error from server (BadRequest): json: cannot unmarshal string

I'm confident the YAML format is valid for Kubernetes (AWS EKS); it passes both kubeval and yamllint validation.
The following is my aws-auth-patch.yml file.
However, when I execute this in CMD:
kubectl patch configmap/aws-auth -n kube-system --patch "$(cat aws-auth-patch.yml)"
I get:
error: Error from server (BadRequest): json: cannot unmarshal string into Go value of type map[string]interface {}
And in Windows PowerShell, kubectl patch configmap/aws-auth -n kube-system --patch $(Get-Content aws-auth-patch.yml -Raw) gives:
error: The request is invalid: patch: Invalid value: "map[apiVersion:v1 data:map[....etc...": cannot convert int64 to string
I believe the YAML file itself is well-formed.
What is causing this error?

I solved it by switching from Windows 10 to WSL (Windows Subsystem for Linux, Ubuntu 20.04 LTS), and now the command below executes successfully.
kubectl patch configmap/aws-auth -n kube-system --patch "$(cat aws-auth-patch.yml)"
and the result is:
configmap/aws-auth patched
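For what it's worth, the two errors point at two separate issues. "cannot unmarshal string" usually means the shell handed kubectl the file content in a form the API server couldn't parse as a YAML/JSON map (quoting and CRLF line endings differ between CMD, PowerShell, and bash), and "cannot convert int64 to string" typically means a value under data: was left unquoted and parsed as a number, while ConfigMap data values must be strings. A minimal sketch of a more portable invocation, assuming a kubectl new enough to support --patch-file (the someCount key is hypothetical):

# In aws-auth-patch.yml, quote anything under data: that looks like a number:
#   data:
#     someCount: "3"    # hypothetical key; quoted so YAML keeps it a string
# Then let kubectl read the file itself, which sidesteps shell quoting entirely
# and behaves the same in CMD, PowerShell, and WSL:
kubectl patch configmap/aws-auth -n kube-system --patch-file aws-auth-patch.yml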

Related

Helm hook for post-install, post-upgrade using busybox wget is failing

I am trying to deploy a Helm post-install, post-upgrade hook which will create a simple pod with busybox and perform a wget against the app's application port to ensure the app is reachable.
I cannot get the hook to pass, even though I know the sample app is up and available.
Here is the manifest:
apiVersion: v1
kind: Pod
metadata:
  name: post-install-test
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
spec:
  containers:
    - name: wget
      image: busybox
      imagePullPolicy: IfNotPresent
      command: ["/bin/sh","-c"]
      args: ["sleep 15; wget {{ include "sampleapp.fullname" . }}:{{ .Values.service.applicationPort.port }}"]
  restartPolicy: Never
As you can see, the wget target in the args is written in Helm's template syntax. A developer will input the desired name of their app in a Jenkins pipeline, so I can't hardcode it.
From kubectl logs -n namespace post-install-test, I see this result:
Connecting to sample-app:8080 (172.20.87.74:8080)
wget: server returned error: HTTP/1.1 404 Not Found
But when I check the EKS resources, I see that the pod running the sample app I'm trying to test has a suffix appended to its name, which I've determined is the pod-template-hash:
sample-app-7fcbd52srj9
Is this suffix making my Helm hook fail? Is there a way I can account for this template hash?
I've tried different syntaxes for the command, and the kubectl logs confirm the Helm hook is attempting to connect but keeps getting a 404.
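A note on reading that log: "Connecting to sample-app:8080 (172.20.87.74:8080)" shows DNS and the Service are resolving fine, so the pod-template-hash suffix is unlikely to be the problem; the 404 means the server answered but had nothing at the requested path. A hedged sketch of the hook's command with an explicit path and a retry loop (/healthz is an assumed health endpoint; substitute whatever your app actually serves):

command: ["/bin/sh", "-c"]
args:
  - |
    # retry up to 10 times; /healthz is a hypothetical endpoint on the app
    for i in $(seq 1 10); do
      wget -q -O- http://{{ include "sampleapp.fullname" . }}:{{ .Values.service.applicationPort.port }}/healthz && exit 0
      sleep 3
    done
    exit 1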

kubectl wait - error: no matching resources found

I am installing MetalLB, but I need to wait for its resources to be created.
kubectl wait --for=condition=ready --timeout=60s -n metallb-system --all pods
But I get:
error: no matching resources found
If I don't wait, I get:
Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "ipaddresspoolvalidationwebhook.metallb.io": failed to call webhook: Post "https://webhook-service.metallb-system.svc:443/validate-metallb-io-v1beta1-ipaddresspool?timeout=10s": dial tcp 10.106.91.126:443: connect: connection refused
Do you know how to wait for the resources to be created before waiting on their condition?
Info:
kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.4", GitCommit:"872a965c6c6526caa949f0c6ac028ef7aff3fb78", GitTreeState:"clean", BuildDate:"2022-11-09T13:36:36Z", GoVersion:"go1.19.3", Compiler:"gc", Platform:"linux/arm64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.4", GitCommit:"872a965c6c6526caa949f0c6ac028ef7aff3fb78", GitTreeState:"clean", BuildDate:"2022-11-09T13:29:58Z", GoVersion:"go1.19.3", Compiler:"gc", Platform:"linux/arm64"}
For the "no matching resources found" error:
Wait a minute and try again; kubectl wait can only match pods that already exist.
You can find an explanation of that error at the following link: Setting up Config Connector
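If you want this unattended, a minimal shell sketch that polls until at least one pod exists in metallb-system before waiting on readiness:

# poll until the pods have been created, then wait for them to become ready
until kubectl get pods -n metallb-system --no-headers 2>/dev/null | grep -q .; do
  echo "waiting for pods to be created..."
  sleep 2
done
kubectl wait --for=condition=ready --timeout=60s -n metallb-system --all pods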
For the STDIN error, follow the steps below. You are getting this error because the API server is not able to reach the webhook.
1) Check whether your firewall rules allow TCP port 443.
2) Temporarily disable the operator:
kubectl -n config-management-system scale deployment config-management-operator --replicas=0
deployment.apps/config-management-operator scaled
Then delete the deployment:
kubectl delete deployments.apps -n <namespace>-system <namespace>-controller-manager
deployment.apps "namespace-controller-manager" deleted
3) Create a configmap in the default namespace:
kubectl create configmap foo
configmap/foo created
4) Check that creating the same configmap fails when the debug-validation label is set on the object:
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    configmanagement.gke.io/debug-force-validation-webhook: "true"
  name: foo
EOF
Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "debug-validation.namespace.sh": failed to call webhook: Post "https://namespace-webhook-service.namespace-system.svc:443/v1/admit?timeout=3s": no endpoints available for service "namespace-webhook-service"
5) Finally, clean up using the commands below:
kubectl delete configmap foo
configmap "foo" deleted
kubectl -n config-management-system scale deployment config-management-operator --replicas=1
deployment.apps/config-management-operator scaled
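Back on the original MetalLB error: the "connection refused" comes from the webhook service having no ready backend yet, so the same polling idea can be applied to the webhook endpoints before applying IPAddressPool resources. A sketch, with the service name webhook-service taken from the error message above:

# wait until the MetalLB validating webhook has at least one ready endpoint
until kubectl get endpoints webhook-service -n metallb-system \
    -o jsonpath='{.subsets[*].addresses[*].ip}' 2>/dev/null | grep -q .; do
  sleep 2
done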

Internal error occurred: failed calling webhook "v1.vseldondeployment.kb.io" while deploying Seldon yaml file on minikube

I am trying to follow the Seldon instructions to build and deploy the Iris model on minikube.
https://docs.seldon.io/projects/seldon-core/en/latest/workflow/github-readme.html#getting-started
I am able to install Seldon with Helm and Knative using a YAML file, but when I apply the YAML file below to deploy the Iris model, I get the following error:
Internal error occurred: failed calling webhook "v1.vseldondeployment.kb.io": Post "https://seldon-webhook-service.seldon-system.svc:443/validate-machinelearning-seldon-io-v1-seldondeployment?timeout=30s": dial tcp 10.107.97.236:443: connect: connection refused
kubectl apply works fine on other files, such as the Knative and broker installations, but this error comes up whenever I kubectl apply any SeldonDeployment YAML file. I also tried cifar10.yaml (CIFAR-10 model deploy) and mnist-model.yaml (MNIST model deploy); they have the same problem.
Has anyone experienced a similar problem, and what are the best ways to troubleshoot and solve it?
My Seldon is 1.8.0-dev, minikube is v1.19.0, and the kubectl server is v1.20.2.
Here is the YAML file:
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: iris-model
  namespace: seldon
spec:
  name: iris
  predictors:
    - graph:
        implementation: SKLEARN_SERVER
        modelUri: gs://seldon-models/sklearn/iris
        name: classifier
      name: default
      replicas: 1
Make sure the Seldon Core manager in seldon-system is running OK: kubectl get pods -n seldon-system.
In my case, the pod was in CrashLoopBackOff status and was constantly restarting.
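To find out why it is crash-looping, the usual kubectl checks apply; a sketch, where the pod name placeholder is whatever kubectl get pods -n seldon-system printed:

kubectl describe pod <seldon-controller-manager-pod> -n seldon-system    # events and restart reason
kubectl logs <seldon-controller-manager-pod> -n seldon-system --previous # logs from the crashed run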
It turned out the problem was with how Seldon had been installed. Instead of:
helm install seldon-core seldon-core-operator \
  --repo https://storage.googleapis.com/seldon-charts \
  --set usageMetrics.enabled=true \
  --set istio.enabled=true \
  --namespace seldon-system
try this instead:
helm install seldon-core seldon-core-operator \
--repo https://storage.googleapis.com/seldon-charts \
--set usageMetrics.enabled=true \
--namespace seldon-system \
--set ambassador.enabled=true
P.S. When reinstalling, you can just delete all the namespaces (which shouldn't be a problem since you're just doing a tutorial) with kubectl delete --all namespaces.

Exporting WSO2 API

While exporting an API, I get the errors below. Please advise.
G:\WSO2\apimcli>apimcli export-api -n PizzaShackAPI -v 1.0.0 -r admin -e dev -k
apimcli: Error while exporting Reason: Get https://localhost:9443/carbon/admin/login.jsp: Auto redirect is disabled
Exit status 1
G:\WSO2\apimcli>apimcli export-api -n PizzaShackAPI -v 1.0.0 -r admin -e dev
apimcli: Error while exporting Reason: Get https://localhost:9443/api-import-export-2.6.0-v0/export-api?name=PizzaShackAPI&preserveStatus=true&provider=admin&version=1.0.0: x509: certificate signed by unknown authority
Exit status 1
Make sure you have deployed the same version of the api-import-export WAR that you configured in the add-env command [1].
apimcli add-env -n production \
--registration https://localhost:9443/client-registration/v0.14/register \
--apim https://localhost:9443 \
--token https://localhost:8243/token \
--import-export https://localhost:9443/api-import-export-2.6.0-v10 \
--admin https://localhost:9443/api/am/admin/v0.14 \
--api_list https://localhost:9443/api/am/publisher/v0.14/apis \
--app_list https://localhost:9443/api/am/store/v0.14/applications
In the above case, it's api-import-export-2.6.0-v10.
[1] https://docs.wso2.com/display/AM260/Migrating+the+APIs+and+Applications+to+a+Different+Environment#Example-AddEnv
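A quick way to confirm which WAR version is actually deployed (a sketch, assuming the default on-disk layout mentioned later in this thread; <APIM_HOME> is a placeholder for your installation directory):

ls <APIM_HOME>/repository/deployment/server/webapps/ | grep api-import-export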
You should create a self-signed certificate and add it to the .jks file at G:\WSO2\wso2am-2.6.0\repository\resources\security\client-truststore.jks. That worked for me.
Here is how to create a self-signed certificate: http://niranjankaru.blogspot.com/2016/01/create-your-own-ssl-certificate-for.html
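Importing the certificate into the client truststore can be done with keytool; a minimal sketch, where server-cert.pem is a hypothetical exported certificate file and wso2carbon is the stock truststore password on a default WSO2 install:

keytool -import -trustcacerts -alias wso2server ^
    -file server-cert.pem ^
    -keystore G:\WSO2\wso2am-2.6.0\repository\resources\security\client-truststore.jks ^
    -storepass wso2carbon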
I sorted out the issue in my case: it was version compatibility among apimcli, the import/export WAR file, and the WSO2 API-M server.
The version WSO2 lists as compatible (api-import-export-2.6.0-v10) was not working properly with our API-M server; lowering the version made it work.
WSO2 API-M version: 2.6.0
Import/Export tool version: APIMCLI v2.0.1
[The zip file downloaded for apimcli is ready to use; no additional config was needed in my case]
Import/Export WAR file version: api-import-export-2.5.0-v1
[The WAR file was hot-deployed to the path wso2am/2.6.0/repository/deployment/server/webapps/]
The commands executed are below.
Exported an already-created API from the DEV environment:
$ ./apimcli export-api -n ProfileManagementNJ -v v1.0.0 -r admin -e dev -k
Successfully exported API!
Find the exported API at /home/stwso2/.wso2apimcli/exported/apis/dev/ProfileManagementNJ_v1.0.0.zip
Imported the above exported API to the ST environment:
$ ./apimcli import-api -k -f /home/stwso2/.wso2apimcli/exported/apis/dev/ProfileManagementNJ_v1.0.0.zip -e st --preserve-provider false
Successfully imported API
The actual error message details, captured from the console log, are as follows:
$ ./apimcli export-api -n ProfileManagementNJ -v 1.0.0 -r admin -e st -k --verbose
Executed ImportExportCLI (apimcli) on Wed, 30 Oct 2019 13:41:52 UTC
[INFO]: Insecure: true
[INFO]: export-api called
[INFO]: ExportAPI: URL: https://172.26.41.4:9443/api-import-export-2.6.0-v10/export-api?name=ProfileManagementNJ&version=1.0.0&provider=admin&preserveStatus=true
apimcli: Error while exporting Reason: Get https://172.26.41.4:9443/carbon/admin/login.jsp: Auto redirect is disabled
Exit status 1
source: https://docs.wso2.com/display/AM260/Migrating+the+APIs+to+a+Different+Environment#Example-exportAPI

istio-ingress can't start up

When I start minikube and apply istio.yaml, the ingress can't start up:
eumji@eumji:~$ kubectl get pods -n istio-system
NAME                             READY   STATUS             RESTARTS   AGE
istio-ca-76dddbd695-bdwm9        1/1     Running            5          2d
istio-ingress-85fb769c4d-qtbcx   0/1     CrashLoopBackOff   67         2d
istio-mixer-587fd4bbdb-ldvhb     3/3     Running            15         2d
istio-pilot-7db8db896c-9znqj     2/2     Running            10         2d
When I try to view the log, I get the following output:
eumji@eumji:~$ kubectl logs -f istio-ingress-85fb769c4d-qtbcx -n istio-system
ERROR: logging before flag.Parse: I1214 05:04:26.193386 1 main.go:68] Version root#24c944bda24b-0.3.0-24ec6a3ac3a1d592d1873d2d8198278a849b8301
ERROR: logging before flag.Parse: I1214 05:04:26.193463 1 main.go:109] Proxy role: proxy.Node{Type:"ingress", IPAddress:"", ID:"istio-ingress-85fb769c4d-qtbcx.istio-system", Domain:"istio-system.svc.cluster.local"}
ERROR: logging before flag.Parse: I1214 05:04:26.193480 1 resolve.go:35] Attempting to lookup address: istio-mixer
ERROR: logging before flag.Parse: I1214 05:04:41.195879 1 resolve.go:42] Finished lookup of address: istio-mixer
Error: lookup failed for udp address: i/o timeout
Usage:
agent proxy [flags]
--serviceregistry string Select the platform for service registry, options are {Kubernetes, Consul, Eureka} (default "Kubernetes")
--statsdUdpAddress string IP Address and Port of a statsd UDP listener (e.g. 10.75.241.127:9125)
--zipkinAddress string Address of the Zipkin service (e.g. zipkin:9411)
Global Flags:
--log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
-v, --v Level log level for V logs (default 0)
--vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
ERROR: logging before flag.Parse: E1214 05:04:41.198640 1 main.go:267] lookup failed for udp address: i/o timeout
What could be the reason?
There is not enough information in your post to figure out what may be wrong; in particular, it seems that somehow your ingress isn't able to resolve istio-mixer, which is unexpected.
Can you file a detailed issue at https://github.com/istio/issues/issues/new and we can take it from there?
Thanks
Are you using something like minikube? The quick-start docs give this hint: "Note: If your cluster is running in an environment that does not support an external load balancer (e.g., minikube), the EXTERNAL-IP of istio-ingress says <pending>. You must access the application using the service NodePort, or use port-forwarding instead."
https://istio.io/docs/setup/kubernetes/quick-start.html
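For example, on minikube you could reach the ingress either way; a sketch, assuming the service is named istio-ingress to match the pod listing above (the local port 8080 is arbitrary):

kubectl -n istio-system get svc istio-ingress                    # note the NodePort mapped to port 80
minikube ip                                                      # node address to pair with that NodePort
kubectl -n istio-system port-forward svc/istio-ingress 8080:80   # or just port-forward locally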