What options do I have for installing istio, either without istioctl, or with istioctl installed on an admin node outside cluster - istio

I don't want to install istioctl on the nodes in my k8s cluster.
I'd strongly prefer to use it to create YAML that I can then install with kubectl.
Alternatively, I'd like to use it on an administrative node separate from the cluster itself.
Most of the documentation is focused on the use of istioctl, and though it doesn't state this explicitly, I'm pretty sure it assumes you install istioctl on one of the master nodes.
Though hopefully my assumption is wrong.
Reading this link, it seems that I may be able to install istioctl on my admin node and have it install everything into the cluster referenced in my .kube folder (just as kubectl does).
So my two questions are:
1. Can I use istioctl installed on an admin node outside of my Kubernetes cluster (I use k3s) and then have it install things into my cluster? (I administer my cluster from an admin node separate from the cluster.)
2. Is there good documentation for doing the complete install using kubectl, and only using istioctl to generate YAML?
A separate question I have relates to the Istio operator. I believe the Istio team has deprecated their operator. Did I misinterpret what I read? I've been in the habit of looking for operators to install the components of my system; they seem to be the direction the K8s ecosystem is heading.
Possible answer to Question 1 above.
As a follow-up, I've gone ahead and installed istioctl on an admin node outside my cluster. So far it seems able to communicate with my cluster fine. Though I've not yet performed an install with istioctl, the following command says "good to go":
/usr/local/istio/istio/bin/istioctl
(base) [dsargrad@localhost ~]$ istioctl x precheck
✔ No issues found when checking the cluster. Istio is safe to install or upgrade!
To get started, check out https://istio.io/latest/docs/setup/getting-started/

I'm pretty sure it assumes you install istioctl on one of the master nodes. Though hopefully my assumption is wrong.
You're in luck, this assumption is wrong. You do not need to install istioctl on the Kubernetes control plane nodes.
Can I use istioctl installed on an admin node outside of my kubernetes cluster
Yes. It is meant to be used like kubectl, either run manually or as part of a CD pipeline.
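For example, istioctl honors the same kubeconfig conventions as kubectl, so you can point it at your cluster explicitly from the admin node. A minimal sketch (the kubeconfig path and context name are placeholders for whatever your admin node uses):
istioctl x precheck --kubeconfig ~/.kube/config --context my-k3s-cluster
istioctl install --kubeconfig ~/.kube/config --context my-k3s-cluster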
Is there good documentation for doing the complete install using kubectl, and only using istioctl to generate yaml
Good is subjective, but this section contains the steps you're looking for.
I believe that the ISTIO team has deprecated their operator. Did I misinterpret what I read?
Yes. It is not deprecated; however, its use is "discouraged". The operator was developed back when the Istio control plane was separated into different components (galley, pilot, mixer, and citadel) and managing them all was a lot more work. Much of the control plane was later consolidated into a single component called istiod, which simplified things considerably.
You are still welcome to use the operator. It contains all the same logic that istioctl does; it just runs it for you in a control loop.
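If you do want to experiment with the operator's API without running the operator in-cluster, a minimal sketch looks something like this (the file name, resource name, and demo profile are just examples; istioctl install -f consumes the same IstioOperator resource the operator would watch):
# write a minimal IstioOperator spec to a file
cat > my-operator.yaml <<'EOF'
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: my-istio
  namespace: istio-system
spec:
  profile: demo
EOF
# feed it to istioctl directly instead of deploying the operator controller
istioctl install -f my-operator.yaml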

istioctl generates the Istio-related manifests. Run
istioctl manifest generate --set profile=demo > istio-demo.yaml
Then take that file into your k8s environment and run kubectl apply -f istio-demo.yaml, which will install Istio into the cluster.
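For completeness, a rough end-to-end sketch of that flow run from an admin node (the profile and file name are just examples; depending on the Istio version you may need to create the istio-system namespace yourself before applying):
istioctl manifest generate --set profile=demo > istio-demo.yaml
kubectl create namespace istio-system
kubectl apply -f istio-demo.yaml
# optionally check the cluster state against the same generated manifest
istioctl verify-install -f istio-demo.yaml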

In short, istioctl run from a separate admin node is able to install into the cluster seemingly quite cleanly:
(base) [dsargrad@localhost samples]$ istioctl install
This will install the Istio 1.14.1 default profile with ["Istio core" "Istiod" "Ingress gateways"] components into the cluster. Proceed? (y/N) y
✔ Istio core installed
✔ Istiod installed
✔ Ingress gateways installed
✔ Installation complete
Making this installation the default for injection and validation.
Thank you for installing Istio 1.14. Please take a few minutes to tell us about your install/upgrade experience! https://forms.gle/yEtCbt45FZ3VoDT5A
(base) [dsargrad@localhost samples]$ k get namespaces
NAME              STATUS   AGE
default           Active   5d12h
kube-system       Active   5d12h
kube-public       Active   5d12h
kube-node-lease   Active   5d12h
rook-ceph         Active   5d7h
istio-system      Active   35s
And the resources in the istio-system namespace all seem quite healthy. Additionally, the ingress gateway seems to have been properly exposed (via LoadBalancer) on the public interface.
(base) [dsargrad@localhost samples]$ k get all -n istio-system
NAME                                        READY   STATUS    RESTARTS   AGE
pod/istiod-859b487f84-xh4nz                 1/1     Running   0          3m8s
pod/svclb-istio-ingressgateway-6z45f        3/3     Running   0          2m54s
pod/svclb-istio-ingressgateway-pt94b        3/3     Running   0          2m54s
pod/istio-ingressgateway-57598d7c8b-shgtw   1/1     Running   0          2m54s

NAME                           TYPE           CLUSTER-IP      EXTERNAL-IP                     PORT(S)                                      AGE
service/istiod                 ClusterIP      10.43.145.104   <none>                          15010/TCP,15012/TCP,443/TCP,15014/TCP        3m8s
service/istio-ingressgateway   LoadBalancer   10.43.3.172     192.168.56.133,192.168.56.134   15021:31786/TCP,80:30752/TCP,443:31197/TCP   2m54s

NAME                                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/svclb-istio-ingressgateway   2         2         2       2            2           <none>          2m54s

NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/istiod                 1/1     1            1           3m8s
deployment.apps/istio-ingressgateway   1/1     1            1           2m55s

NAME                                              DESIRED   CURRENT   READY   AGE
replicaset.apps/istiod-859b487f84                 1         1         1       3m8s
replicaset.apps/istio-ingressgateway-57598d7c8b   1         1         1       2m55s

NAME                                                       REFERENCE                         TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/istio-ingressgateway   Deployment/istio-ingressgateway   2%/80%    1         5         1          2m54s
horizontalpodautoscaler.autoscaling/istiod                 Deployment/istiod                 0%/80%    1         5         1          3m8s
Lastly a verification of the installation seems healthy:
(base) [dsargrad@localhost kubernetes]$ istioctl verify-install
1 Istio control planes detected, checking --revision "default" only
components.pilot.k8s.replicaCount should not be set when values.pilot.autoscaleEnabled is true
✔ HorizontalPodAutoscaler: istio-ingressgateway.istio-system checked successfully
✔ Deployment: istio-ingressgateway.istio-system checked successfully
✔ PodDisruptionBudget: istio-ingressgateway.istio-system checked successfully
✔ Role: istio-ingressgateway-sds.istio-system checked successfully
✔ RoleBinding: istio-ingressgateway-sds.istio-system checked successfully
✔ Service: istio-ingressgateway.istio-system checked successfully
✔ ServiceAccount: istio-ingressgateway-service-account.istio-system checked successfully
✔ ClusterRole: istiod-istio-system.istio-system checked successfully
✔ ClusterRole: istio-reader-istio-system.istio-system checked successfully
✔ ClusterRoleBinding: istio-reader-istio-system.istio-system checked successfully
✔ ClusterRoleBinding: istiod-istio-system.istio-system checked successfully
✔ ServiceAccount: istio-reader-service-account.istio-system checked successfully
✔ Role: istiod-istio-system.istio-system checked successfully
✔ RoleBinding: istiod-istio-system.istio-system checked successfully
✔ ServiceAccount: istiod-service-account.istio-system checked successfully
✔ CustomResourceDefinition: wasmplugins.extensions.istio.io.istio-system checked successfully
✔ CustomResourceDefinition: destinationrules.networking.istio.io.istio-system checked successfully
✔ CustomResourceDefinition: envoyfilters.networking.istio.io.istio-system checked successfully
✔ CustomResourceDefinition: gateways.networking.istio.io.istio-system checked successfully
✔ CustomResourceDefinition: proxyconfigs.networking.istio.io.istio-system checked successfully
✔ CustomResourceDefinition: serviceentries.networking.istio.io.istio-system checked successfully
✔ CustomResourceDefinition: sidecars.networking.istio.io.istio-system checked successfully
✔ CustomResourceDefinition: virtualservices.networking.istio.io.istio-system checked successfully
✔ CustomResourceDefinition: workloadentries.networking.istio.io.istio-system checked successfully
✔ CustomResourceDefinition: workloadgroups.networking.istio.io.istio-system checked successfully
✔ CustomResourceDefinition: authorizationpolicies.security.istio.io.istio-system checked successfully
✔ CustomResourceDefinition: peerauthentications.security.istio.io.istio-system checked successfully
✔ CustomResourceDefinition: requestauthentications.security.istio.io.istio-system checked successfully
✔ CustomResourceDefinition: telemetries.telemetry.istio.io.istio-system checked successfully
✔ CustomResourceDefinition: istiooperators.install.istio.io.istio-system checked successfully
✔ HorizontalPodAutoscaler: istiod.istio-system checked successfully
✔ ClusterRole: istiod-clusterrole-istio-system.istio-system checked successfully
✔ ClusterRole: istiod-gateway-controller-istio-system.istio-system checked successfully
✔ ClusterRoleBinding: istiod-clusterrole-istio-system.istio-system checked successfully
✔ ClusterRoleBinding: istiod-gateway-controller-istio-system.istio-system checked successfully
✔ ConfigMap: istio.istio-system checked successfully
✔ Deployment: istiod.istio-system checked successfully
✔ ConfigMap: istio-sidecar-injector.istio-system checked successfully
✔ MutatingWebhookConfiguration: istio-sidecar-injector.istio-system checked successfully
✔ PodDisruptionBudget: istiod.istio-system checked successfully
✔ ClusterRole: istio-reader-clusterrole-istio-system.istio-system checked successfully
✔ ClusterRoleBinding: istio-reader-clusterrole-istio-system.istio-system checked successfully
✔ Role: istiod.istio-system checked successfully
✔ RoleBinding: istiod.istio-system checked successfully
✔ Service: istiod.istio-system checked successfully
✔ ServiceAccount: istiod.istio-system checked successfully
✔ EnvoyFilter: stats-filter-1.11.istio-system checked successfully
✔ EnvoyFilter: tcp-stats-filter-1.11.istio-system checked successfully
✔ EnvoyFilter: stats-filter-1.12.istio-system checked successfully
✔ EnvoyFilter: tcp-stats-filter-1.12.istio-system checked successfully
✔ EnvoyFilter: stats-filter-1.13.istio-system checked successfully
✔ EnvoyFilter: tcp-stats-filter-1.13.istio-system checked successfully
✔ EnvoyFilter: stats-filter-1.14.istio-system checked successfully
✔ EnvoyFilter: tcp-stats-filter-1.14.istio-system checked successfully
✔ EnvoyFilter: stats-filter-1.15.istio-system checked successfully
✔ EnvoyFilter: tcp-stats-filter-1.15.istio-system checked successfully
✔ ValidatingWebhookConfiguration: istio-validator-istio-system.istio-system checked successfully
Checked 15 custom resource definitions
Checked 2 Istio Deployments
✔ Istio is installed and verified successfully


How to check if the latest Cloud Run revision is ready to serve

I've been using Cloud Run for a while and the entire user experience is simply amazing!
Currently I'm using Cloud Build to build the container image, push the image to GCR, and then create a new Cloud Run revision.
Now I want to call a script to purge caches from the CDN after the latest revision is successfully deployed to Cloud Run; however, the gcloud run deploy command can't tell you whether traffic has started pointing to the latest revision.
Is there any command, or an event that I can subscribe to, to make sure no traffic is pointing to the old revision, so that I can safely purge all caches?
@Dustin's answer is correct; however, "status" messages are an indirect result of Route configuration, as those things are updated separately (and you might see a few seconds of delay between them). The status message will still be able to tell you the Revision has been taken out of rotation, if you don't mind that delay.
To answer this specific question (emphasis mine) using API objects directly:
Is there any command or the event that I can subscribe to to make sure no traffic is pointing to the old revision?
You need to look at Route objects on the API. This is a Knative API (it's available on Cloud Run) but it doesn't have a gcloud command: https://cloud.google.com/run/docs/reference/rest/v1/namespaces.routes
For example, assume you did 50%-50% traffic split on your Cloud Run service. When you do this, you’ll find your Service object (which you can see on Cloud Console → Cloud Run → YAML tab) has the following spec.traffic field:
spec:
  traffic:
  - revisionName: hello-00002-mob
    percent: 50
  - revisionName: hello-00001-vat
    percent: 50
This is "desired configuration" but it actually might not reflect the status definitively. Changing this field will go and update Route object –which decides how the traffic is splitted.
To see the Route object under the covers (sadly I'll have to use curl here, because there is no gcloud command for this):
TOKEN="$(gcloud auth print-access-token)"
curl -vH "Authorization: Bearer $TOKEN" \
https://us-central1-run.googleapis.com/apis/serving.knative.dev/v1/namespaces/GCP_PROJECT/routes/SERVICE_NAME
This command will show you the output:
"spec": {
"traffic": [
{
"revisionName": "hello-00002-mob",
"percent": 50
},
{
"revisionName": "hello-00001-vat",
"percent": 50
}
]
},
This output (which you might notice is identical to the Service's spec.traffic, because it's copied from there) can tell you definitively which revisions are currently serving traffic for that particular Service.
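If you are scripting this, the same curl call can be piped through jq to pull out just the traffic targets. A small sketch, using the same GCP_PROJECT and SERVICE_NAME placeholders as above, and falling back to spec.traffic if the status block isn't populated:
TOKEN="$(gcloud auth print-access-token)"
curl -sH "Authorization: Bearer $TOKEN" \
  https://us-central1-run.googleapis.com/apis/serving.knative.dev/v1/namespaces/GCP_PROJECT/routes/SERVICE_NAME \
  | jq '.status.traffic // .spec.traffic'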
You can use gcloud run revisions list to get a list of all revisions:
$ gcloud run revisions list --service helloworld
   REVISION          ACTIVE  SERVICE     DEPLOYED                 DEPLOYED BY
✔  helloworld-00009  yes     helloworld  2019-08-17 02:09:01 UTC  email@email.com
✔  helloworld-00008          helloworld  2019-08-17 01:59:38 UTC  email@email.com
✔  helloworld-00007          helloworld  2019-08-13 22:58:18 UTC  email@email.com
✔  helloworld-00006          helloworld  2019-08-13 22:51:18 UTC  email@email.com
✔  helloworld-00005          helloworld  2019-08-13 22:46:14 UTC  email@email.com
✔  helloworld-00004          helloworld  2019-08-13 22:41:44 UTC  email@email.com
✔  helloworld-00003          helloworld  2019-08-13 22:39:16 UTC  email@email.com
✔  helloworld-00002          helloworld  2019-08-13 22:36:06 UTC  email@email.com
✔  helloworld-00001          helloworld  2019-08-13 22:30:03 UTC  email@email.com
You can also use gcloud run revisions describe to get details about a specific revision, which will contain a status field. For example, an active revision:
$ gcloud run revisions describe helloworld-00009
...
status:
  conditions:
  - lastTransitionTime: '2019-08-17T02:09:07.871Z'
    status: 'True'
    type: Ready
  - lastTransitionTime: '2019-08-17T02:09:14.027Z'
    status: 'True'
    type: Active
  - lastTransitionTime: '2019-08-17T02:09:07.871Z'
    status: 'True'
    type: ContainerHealthy
  - lastTransitionTime: '2019-08-17T02:09:05.483Z'
    status: 'True'
    type: ResourcesAvailable
And an inactive revision:
$ gcloud run revisions describe helloworld-00008
...
status:
  conditions:
  - lastTransitionTime: '2019-08-17T01:59:45.713Z'
    status: 'True'
    type: Ready
  - lastTransitionTime: '2019-08-17T02:39:46.975Z'
    message: Revision retired.
    reason: Retired
    status: 'False'
    type: Active
  - lastTransitionTime: '2019-08-17T01:59:45.713Z'
    status: 'True'
    type: ContainerHealthy
  - lastTransitionTime: '2019-08-17T01:59:43.142Z'
    status: 'True'
    type: ResourcesAvailable
You'll specifically want to check the type: Active condition.
This is all available via the Cloud Run REST API as well: https://cloud.google.com/run/docs/reference/rest/v1/namespaces.revisions
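If the goal is simply "wait until the newest revision is serving before purging the CDN", a rough polling sketch along these lines should work (the service name, region, and sleep interval are placeholders, and this assumes the fully managed platform):
SERVICE=helloworld
REGION=us-central1
WANTED=$(gcloud run services describe "$SERVICE" --region "$REGION" --platform managed \
  --format='value(status.latestCreatedRevisionName)')
# poll until the latest created revision is also the latest ready one
until [ "$(gcloud run services describe "$SERVICE" --region "$REGION" --platform managed \
  --format='value(status.latestReadyRevisionName)')" = "$WANTED" ]; do
  sleep 5
done
echo "Revision $WANTED is ready"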
By default, the traffic is routed to the latest revision. You can see this in the deploy logs.
Deploying container to Cloud Run service [SERVICE_NAME] in project [YOUR_PROJECT] region [YOUR_REGION]
✓ Deploying... Done.
✓ Creating Revision...
✓ Routing traffic...
Done.
Service [SERVICE_NAME] revision [SERVICE_NAME-00012-yic] has been deployed and is serving 100 percent of traffic at https://SERVICE_NAME-vqg64v3fcq-uc.a.run.app
If you want to be sure, you can explicitly call the update-traffic command:
gcloud run services update-traffic --platform=managed --region=YOUR_REGION --to-latest YOUR_SERVICE
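To confirm where traffic actually ended up afterwards, you can inspect the service's traffic status (service name and region are placeholders):
gcloud run services describe YOUR_SERVICE --platform=managed --region=YOUR_REGION --format='yaml(status.traffic)'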

istio is failing to install in a Kubernetes cluster built via Kops in AWS

I can't get the demo profile to work with istioctl. It seems like istioctl is having trouble creating the IngressGateways and the AddonComponents. I have tried the Helm installation and hit similar issues, and I built a fresh k8s cluster from kops with the same result. Any help debugging this issue would be greatly appreciated.
I am following these instructions.
https://istio.io/docs/setup/getting-started/#download
I am running
istioctl manifest apply --set profile=demo --logtostderr
This is the output
2020-04-06T19:59:24.951136Z info Detected that your cluster does not support third party JWT authentication. Falling back to less secure first party JWT. See https://istio.io/docs/ops/best-practices/security/#configure-third-party-service-account-tokens for details.
- Applying manifest for component Base...
✔ Finished applying manifest for component Base.
- Applying manifest for component Pilot...
✔ Finished applying manifest for component Pilot.
- Applying manifest for component IngressGateways...
- Applying manifest for component EgressGateways...
- Applying manifest for component AddonComponents...
✔ Finished applying manifest for component EgressGateways.
2020-04-06T20:00:11.501795Z error installer error running kubectl: exit status 1
✘ Finished applying manifest for component AddonComponents.
2020-04-06T20:00:40.418396Z error installer error running kubectl: exit status 1
✘ Finished applying manifest for component IngressGateways.
2020-04-06T20:00:40.421746Z info
Component AddonComponents - manifest apply returned the following errors:
2020-04-06T20:00:40.421823Z info Error: error running kubectl: exit status 1
2020-04-06T20:00:40.421884Z info Error detail:
Error from server (Timeout): error when creating "STDIN": Timeout: request did not complete within requested timeout 30s (repeated 1 times)
clusterrole.rbac.authorization.k8s.io/kiali unchanged
clusterrole.rbac.authorization.k8s.io/kiali-viewer unchanged
clusterrole.rbac.authorization.k8s.io/prometheus-istio-system unchanged
clusterrolebinding.rbac.authorization.k8s.io/kiali unchanged
clusterrolebinding.rbac.authorization.k8s.io/prometheus-istio-system unchanged
serviceaccount/kiali-service-account unchanged
serviceaccount/prometheus unchanged
configmap/istio-grafana unchanged
configmap/istio-grafana-configuration-dashboards-citadel-dashboard unchanged
configmap/istio-grafana-configuration-dashboards-galley-dashboard unchanged
configmap/istio-grafana-configuration-dashboards-istio-mesh-dashboard unchanged
configmap/istio-grafana-configuration-dashboards-istio-performance-dashboard unchanged
configmap/istio-grafana-configuration-dashboards-istio-service-dashboard unchanged
configmap/istio-grafana-configuration-dashboards-istio-workload-dashboard unchanged
configmap/istio-grafana-configuration-dashboards-mixer-dashboard unchanged
configmap/istio-grafana-configuration-dashboards-pilot-dashboard unchanged
configmap/kiali configured
configmap/prometheus unchanged
secret/kiali unchanged
deployment.apps/grafana unchanged
deployment.apps/istio-tracing unchanged
deployment.apps/kiali unchanged
deployment.apps/prometheus unchanged
service/grafana unchanged
service/jaeger-agent unchanged
service/jaeger-collector unchanged
service/jaeger-collector-headless unchanged
service/jaeger-query unchanged
service/kiali unchanged
service/prometheus unchanged
service/tracing unchanged
service/zipkin unchanged
2020-04-06T20:00:40.421999Z info
Component IngressGateways - manifest apply returned the following errors:
2020-04-06T20:00:40.422056Z info Error: error running kubectl: exit status 1
2020-04-06T20:00:40.422096Z info Error detail:
Error from server (Timeout): error when creating "STDIN": Timeout: request did not complete within requested timeout 30s (repeated 2 times)
serviceaccount/istio-ingressgateway-service-account unchanged
deployment.apps/istio-ingressgateway configured
poddisruptionbudget.policy/ingressgateway unchanged
role.rbac.authorization.k8s.io/istio-ingressgateway-sds unchanged
rolebinding.rbac.authorization.k8s.io/istio-ingressgateway-sds unchanged
service/istio-ingressgateway unchanged
2020-04-06T20:00:40.422134Z info
✘ Errors were logged during apply operation. Please check component installation logs above.
Error: failed to apply manifests: errors were logged during apply operation
I ran the command below to verify the install before running the above commands.
istioctl verify-install
Checking the cluster to make sure it is ready for Istio installation...
#1. Kubernetes-api
-----------------------
Can initialize the Kubernetes client.
Can query the Kubernetes API Server.
#2. Kubernetes-version
-----------------------
Istio is compatible with Kubernetes: v1.16.7.
#3. Istio-existence
-----------------------
Istio will be installed in the istio-system namespace.
#4. Kubernetes-setup
-----------------------
Can create necessary Kubernetes configurations: Namespace,ClusterRole,ClusterRoleBinding,CustomResourceDefinition,Role,ServiceAccount,Service,Deployments,ConfigMap.
#5. SideCar-Injector
-----------------------
This Kubernetes cluster supports automatic sidecar injection. To enable automatic sidecar injection see https://istio.io/docs/setup/kubernetes/additional-setup/sidecar-injection/#deploying-an-app
As mentioned in your logs
2020-04-06T19:59:24.951136Z info Detected that your cluster does not support third party JWT authentication. Falling back to less secure first party JWT.
As mentioned here
To determine if your cluster supports third party tokens, look for the TokenRequest API:
$ kubectl get --raw /api/v1 | jq '.resources[] | select(.name | index("serviceaccounts/token"))'
{
  "name": "serviceaccounts/token",
  "singularName": "",
  "namespaced": true,
  "group": "authentication.k8s.io",
  "version": "v1",
  "kind": "TokenRequest",
  "verbs": [
    "create"
  ]
}
While most cloud providers support this feature now, many local development tools and custom installations may not. To enable this feature, please refer to the Kubernetes documentation.
To authenticate with the Istio control plane, the Istio proxy will use a Service Account token. Kubernetes supports two forms of these tokens:
Third party tokens, which have a scoped audience and expiration.
First party tokens, which have no expiration and are mounted into all pods.
Because the properties of the first party token are less secure, Istio will default to using third party tokens. However, this feature is not enabled on all Kubernetes platforms.
If you are using istioctl to install, support will be automatically detected. This can be done manually as well, and configured by passing --set values.global.jwtPolicy=third-party-jwt or --set values.global.jwtPolicy=first-party-jwt.
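For reference, passing that setting explicitly looks like this (shown here with the same demo profile as in the question):
istioctl manifest apply --set profile=demo --set values.global.jwtPolicy=first-party-jwt --logtostderr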
If that doesn't work, I would open a new GitHub issue, or add a comment here, as the installation issue is similar.

Intermittent DNS issues while pulling docker image from ECR repository

Has anyone faced this issue with docker pull? We recently upgraded Docker to 18.03.1-ce, and since then we have been seeing this issue. We are not entirely sure it is related to Docker, but we want to know if anyone has faced this problem.
We have done some troubleshooting using tcpdump; the DNS queries being made were under the permissible limit of 1024 packets, which is a limit on EC2. We also tried working around the issue by modifying the /etc/resolv.conf file to use higher retry/timeout values, but that didn't seem to help.
We went through the packet capture line by line and found something: some of the responses are negative. If you use Wireshark, you can use 'udp.stream eq 12' as a filter to view one of the negative answers. We can see the resolver sending the answer "No such name". All of the requests that get a negative response use the following name in the request:
354XXXXX.dkr.ecr.us-east-1.amazonaws.com.ec2.internal
Would anyone happen to know why ec2.internal is being appended to the end of the DNS name? If I run a dig against this name it fails, so it appears that a wrong name is being sent to the server, which responds with 'no such host'. Is Docker sending a wrong DNS name for resolution?
We see this issue happening intermittently. Looking forward to any help. Thanks in advance.
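For reference, this is roughly how we have been checking whether the resolver's search domain is what appends ec2.internal (the registry name is redacted the same way as above; the trailing dot marks the name as fully qualified, so no search suffix should be appended):
# show the search domains and ndots options the host resolver is using
cat /etc/resolv.conf
# getent resolves through the host's resolv.conf settings; compare the plain name
# with the fully qualified form (note the trailing dot)
getent hosts 354XXXXX.dkr.ecr.us-east-1.amazonaws.com
getent hosts 354XXXXX.dkr.ecr.us-east-1.amazonaws.com.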
Expected behaviour
5.0.25_61: Pulling from rrg
Digest: sha256:50bbce4af6749e9a976f0533c3b50a0badb54855b73d8a3743473f1487fd223e
Status: Downloaded newer image for XXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/rrg:5.0.25_61
Actual behaviour
docker-compose up -d rrg-node-1
Creating rrg-node-1
ERROR: for rrg-node-1 Cannot create container for service rrg-node-1: Error response from daemon: Get https:/XXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/v2/: dial tcp: lookup XXXXXXXX.dkr.ecr.us-east-1.amazonaws.com on 10.5.0.2:53: no such host
Steps to reproduce the issue
docker pull XXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/rrg:5.0.25_61
Output of docker version:
Docker version 18.03.1-ce, build 3dfb8343b139d6342acfd9975d7f1068b5b1c3d3
Output of docker info:
[ec2-user@ip-10-5-3-45 ~]$ docker info
Containers: 37
Running: 36
Paused: 0
Stopped: 1
Images: 60
Server Version: swarm/1.2.5
Role: replica
Primary: 10.5.4.172:3375
Strategy: spread
Filters: health, port, containerslots, dependency, affinity, constraint
Nodes: 12
Plugins:
Volume:
Network:
Log:
Swarm:
NodeID:
Is Manager: false
Node Address:
Kernel Version: 4.14.51-60.38.amzn1.x86_64
Operating System: linux
Architecture: amd64
CPUs: 22
Total Memory: 80.85GiB
Name: mgr1
Docker Root Dir:
Debug Mode (client): false
Debug Mode (server): false
Experimental: false
Live Restore Enabled: false
WARNING: No kernel memory limit support

Something wrong on deploy chaincode for hyperledger v1.0

I have tried to use Docker Toolbox to set up Hyperledger v1.0 on my local machine.
I followed this document:
http://hyperledger-fabric.readthedocs.io/en/latest/asset_setup.html
But when I tried to deploy the chaincode:
$ node deploy.js
I got an error message:
info: Returning a new winston logger with default configurations
info: [Chain.js]: Constructed Chain instance: name - fabric-client1, securityEnabled: true, TCert download batch size: 10, network mode: true
info: [Peer.js]: Peer.const - url: grpc://localhost:8051 options grpc.ssl_target_name_override=tlsca, grpc.default_authority=tlsca
info: [Peer.js]: Peer.const - url: grpc://localhost:8055 options grpc.ssl_target_name_override=tlsca, grpc.default_authority=tlsca
info: [Peer.js]: Peer.const - url: grpc://localhost:8056 options grpc.ssl_target_name_override=tlsca, grpc.default_authority=tlsca
info: [Client.js]: Failed to load user "admin" from local key value store
info: [FabricCAClientImpl.js]: Successfully constructed Fabric COP service client: endpoint - {"protocol":"http","hostname":"localhost","port":8054}
info: [crypto_ecdsa_aes]: This class requires a KeyValueStore to save keys, no store was passed in, using the default store C:\Users\daniel\.hfc-key-store
[2017-04-15 22:14:29.268] [ERROR] Helper - Error: Calling enrollment endpoint failed with error [Error: connect ECONNREFUSED 127.0.0.1:8054]
at ClientRequest.<anonymous> (C:\Users\daniel\node_modules\fabric-ca-client\lib\FabricCAClientImpl.js:304:12)
at emitOne (events.js:96:13)
at ClientRequest.emit (events.js:188:7)
at Socket.socketErrorListener (_http_client.js:310:9)
at emitOne (events.js:96:13)
at Socket.emit (events.js:188:7)
at emitErrorNT (net.js:1278:8)
at _combinedTickCallback (internal/process/next_tick.js:74:11)
at process._tickCallback (internal/process/next_tick.js:98:9)
[2017-04-15 22:14:29.273] [ERROR] DEPLOY - Error: Failed to obtain an enrolled user
at ca_client.enroll.then.then.then.catch (C:\Users\daniel\helper.js:59:12)
at process._tickCallback (internal/process/next_tick.js:103:7)
events.js:160
throw er; // Unhandled 'error' event
^
Error: Connect Failed
at ClientDuplexStream._emitStatusIfDone (C:\Users\daniel\node_modules\grpc\src\node\src\client.js:201:19)
at ClientDuplexStream._readsDone (C:\Users\daniel\node_modules\grpc\src\node\src\client.js:169:8)
at readCallback (C:\Users\daniel\node_modules\grpc\src\node\src\client.js:229:12)
Is this an issue with being unable to connect to the CA, or is there another cause?
Edit:
Environment:
OS: Windows 10 Professional Edition
Docker Toolbox: 17.04.0-ce
Go: 1.7.5
Node.js: 6.10.0
My steps:
1. Open Docker Quickstart Terminal and enter the following commands.
$curl -L https://raw.githubusercontent.com/hyperledger/fabric/master/examples/sfhackfest/sfhackfest.tar.gz -o sfhackfest.tar.gz 2> /dev/null; tar -xvf sfhackfest.tar.gz
$docker-compose -f docker-compose-gettingstarted.yml build
$docker-compose -f docker-compose-gettingstarted.yml up -d
$docker ps
I confirmed that six containers have been started.
2. Download the examples and install the modules.
$curl -OOOOOO https://raw.githubusercontent.com/hyperledger/fabric-sdk-node/v1.0-alpha/examples/balance-transfer/{config.json,deploy.js,helper.js,invoke.js,query.js,package.json}
// This link didn't work, so I downloaded the required files from the fabric-sdk-node GitHub repository
$npm install --global windows-build-tools
$npm install
3. Try to deploy the chaincode.
$node deploy.js
There were several problems, not the least of which is that the documentation was outdated and written for a preview release of Hyperledger Fabric. Those docs are actually in the process of being removed, as we need to update our examples/samples.
You mentioned Docker Toolbox - so are you trying to run all of this on Windows or Mac?
UPDATE:
So one of the issues with Docker Toolbox or Docker for Windows is that you cannot use localhost / 127.0.0.1 as the address when trying to communicate from apps on the host (even in the Quickstart Terminal) to the endpoints of the Docker containers. When the Quickstart Terminal first launches Docker, you'll see that it outputs the IP address of the endpoint you should use when communicating with exposed ports.
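If you're not sure what that IP is, you can ask docker-machine for it at any time. A quick sketch, assuming the default machine name that Docker Toolbox creates:
# print the IP of the Docker Toolbox VM; use this instead of localhost/127.0.0.1
docker-machine ip default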
I was having the same issue while following the latest "Writing Your First Application" tutorial (http://hyperledger-fabric.readthedocs.io/en/latest/write_first_app.html). I had installed all the pre-requisites and the fabric-samples and started the local network.
When I got to the step of enrolling the Admin user, $ node enrollAdmin.js, I was getting the same error message as above, Error: connect ECONNREFUSED, followed by the localhost domain.
As the first answer suggests, the root cause is that I'm running Docker Toolbox. I'm developing on an older Mac, OSX v10.9.5, so I couldn't use Docker for Mac.
To fix the issue, I replaced 'localhost' in the enrollAdmin.js code with the IP from Docker Toolbox.
Here are the steps I took:
Started Docker with Applications > Docker Quickstart Terminal
Copied the IP from this sentence: docker is configured to use the default machine with IP...
Opened the copy of enrollAdmin.js from fabric-samples/fabcar directory
Found this code:
// be sure to change the http to https when the CA is running TLS enabled
fabric_ca_client = new Fabric_CA_Client('http://localhost:7054', tlsOptions , 'ca.example.com', crypto_suite); // <-- This is the line to change
Replaced 'localhost' with the Docker IP, leaving the port :7054 as is.
Saved
Re-ran the command, $ node enrollAdmin.js
The script connected to the CA and successfully completed the Admin enrollment.
On to the next step!

Kubeadm why does my node not show up though kubelet says it joined?

I am setting up a Kubernetes deployment using auto-scaling groups and Terraform. The kube master node is behind an ELB to provide some reliability in case something goes wrong. The ELB has its health check set to TCP 6443, and TCP listeners for 8080, 6443, and 9898. All of the instances and the load balancer belong to a security group that allows all traffic between members of the group, plus public traffic from the NAT Gateway address. I created my AMI using the following script (from the getting started guide)...
# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
# apt-get update
# # Install docker if you don't have it already.
# apt-get install -y docker.io
# apt-get install -y kubelet kubeadm kubectl kubernetes-cni
I use the following user data scripts...
kube master
#!/bin/bash
rm -rf /etc/kubernetes/*
rm -rf /var/lib/kubelet/*
kubeadm init \
--external-etcd-endpoints=http://${etcd_elb}:2379 \
--token=${token} \
--use-kubernetes-version=${k8s_version} \
--api-external-dns-names=kmaster.${master_elb_dns} \
--cloud-provider=aws
until kubectl cluster-info
do
sleep 1
done
kubectl apply -f https://git.io/weave-kube
kube node
#!/bin/bash
rm -rf /etc/kubernetes/*
rm -rf /var/lib/kubelet/*
until kubeadm join --token=${token} kmaster.${master_elb_dns}
do
sleep 1
done
Everything seems to work properly. The master comes up and responds to kubectl commands, with pods for discovery, dns, weave, controller-manager, api-server, and scheduler. kubeadm has the following output on the node...
Running pre-flight checks
<util/tokens> validating provided token
<node/discovery> created cluster info discovery client, requesting info from "http://kmaster.jenkins.learnvest.net:9898/cluster-info/v1/?token-id=eb31c0"
<node/discovery> failed to request cluster info, will try again: [Get http://kmaster.jenkins.learnvest.net:9898/cluster-info/v1/?token-id=eb31c0: EOF]
<node/discovery> cluster info object received, verifying signature using given token
<node/discovery> cluster info signature and contents are valid, will use API endpoints [https://10.253.129.106:6443]
<node/bootstrap> trying to connect to endpoint https://10.253.129.106:6443
<node/bootstrap> detected server version v1.4.4
<node/bootstrap> successfully established connection with endpoint https://10.253.129.106:6443
<node/csr> created API client to obtain unique certificate for this node, generating keys and certificate signing request
<node/csr> received signed certificate from the API server:
Issuer: CN=kubernetes | Subject: CN=system:node:ip-10-253-130-44 | CA: false
Not before: 2016-10-27 18:46:00 +0000 UTC Not After: 2017-10-27 18:46:00 +0000 UTC
<node/csr> generating kubelet configuration
<util/kubeconfig> created "/etc/kubernetes/kubelet.conf"
Node join complete:
* Certificate signing request sent to master and response
received.
* Kubelet informed of new secure connection details.
Run 'kubectl get nodes' on the master to see this machine join.
Unfortunately, running kubectl get nodes on the master only returns itself as a node. The only interesting thing I see in /var/log/syslog is
Oct 27 21:19:28 ip-10-252-39-25 kubelet[19972]: E1027 21:19:28.198736 19972 eviction_manager.go:162] eviction manager: unexpected err: failed GetNode: node 'ip-10-253-130-44' not found
Oct 27 21:19:31 ip-10-252-39-25 kubelet[19972]: E1027 21:19:31.778521 19972 kubelet_node_status.go:301] Error updating node status, will retry: error getting node "ip-10-253-130-44": nodes "ip-10-253-130-44" not found
I am really not sure where to look...
The hostnames of the two machines (the master and the node) should be different. You can check them by running cat /etc/hostname. If they do happen to be the same, edit that file to make them different and then do a sudo reboot to apply the change. Otherwise kubeadm will not be able to differentiate between the two machines, and they will show up as a single node in kubectl get nodes.
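For example, a quick sketch of checking and fixing this with hostnamectl instead of editing the file by hand (the node names here are placeholders; run each command on the corresponding machine):
# on each machine, check the current hostname
cat /etc/hostname
# if they match, give each machine a unique name, then reboot
sudo hostnamectl set-hostname kmaster    # on the master
sudo hostnamectl set-hostname knode-1    # on the worker
sudo reboot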
Yes, I faced the same problem.
I resolved it by:
1. killall kubelet
2. running the kubeadm join command again
3. starting the kubelet service
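A rough sketch of those steps as commands on the worker node (the token and master address are the same placeholders used in the user data script above):
sudo killall kubelet
sudo kubeadm join --token=${token} kmaster.${master_elb_dns}
sudo systemctl start kubelet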