How can I solve this problem? I am trying to run an Akka cluster on minikube, but the cluster fails to form.
17:46:49.093 [appka-akka.actor.default-dispatcher-12] WARN akka.management.cluster.bootstrap.internal.HttpContactPointBootstrap - Probing [http://172-17-0-3.default.pod.cluster.local:8558/bootstrap/seed-nodes] failed due to: Tcp command [Connect(172-17-0-3.default.pod.cluster.local:8558,None,List(),Some(10 seconds),true)] failed because of java.net.ConnectException: Connection refused
My config is:
akka {
  actor {
    provider = cluster
  }
  cluster {
    shutdown-after-unsuccessful-join-seed-nodes = 60s
  }
  coordinated-shutdown.exit-jvm = on
  management {
    cluster.bootstrap {
      contact-point-discovery {
        discovery-method = kubernetes-api
      }
    }
  }
}
My deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: appka
  name: appka
spec:
  replicas: 2
  selector:
    matchLabels:
      app: appka
  template:
    metadata:
      labels:
        app: appka
    spec:
      containers:
        - name: appka
          image: akkacluster:latest
          imagePullPolicy: Never
          readinessProbe:
            httpGet:
              path: /ready
              port: management
            periodSeconds: 10
            failureThreshold: 10
            initialDelaySeconds: 20
          livenessProbe:
            httpGet:
              path: /alive
              port: management
            periodSeconds: 10
            failureThreshold: 10
            initialDelaySeconds: 20
          ports:
            - name: management
              containerPort: 8558
              protocol: TCP
            - name: http
              containerPort: 8080
              protocol: TCP
            - name: remoting
              containerPort: 25520
              protocol: TCP
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
subjects:
  - kind: User
    name: system:serviceaccount:default:default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
Unfortunately my cluster is not forming:
kubectl logs pod/appka-7c4b7df7f7-5v7cc
17:46:32.026 [appka-akka.actor.default-dispatcher-3] INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
SLF4J: A number (1) of logging calls during the initialization phase have been intercepted and are
SLF4J: now being replayed. These are subject to the filtering rules of the underlying logging system.
SLF4J: See also http://www.slf4j.org/codes.html#replay
17:46:33.644 [appka-akka.actor.default-dispatcher-3] INFO akka.remote.artery.tcp.ArteryTcpTransport - Remoting started with transport [Artery tcp]; listening on address [akka://appka#172.17.0.4:25520] with UID [-8421566647681174079]
17:46:33.811 [appka-akka.actor.default-dispatcher-3] INFO akka.cluster.Cluster - Cluster Node [akka://appka#172.17.0.4:25520] - Starting up, Akka version [2.6.14] ...
17:46:34.491 [appka-akka.actor.default-dispatcher-3] INFO akka.cluster.Cluster - Cluster Node [akka://appka#172.17.0.4:25520] - Registered cluster JMX MBean [akka:type=Cluster]
17:46:34.512 [appka-akka.actor.default-dispatcher-3] INFO akka.cluster.Cluster - Cluster Node [akka://appka#172.17.0.4:25520] - Started up successfully
17:46:34.883 [appka-akka.actor.default-dispatcher-3] INFO akka.cluster.Cluster - Cluster Node [akka://appka#172.17.0.4:25520] - No downing-provider-class configured, manual cluster downing required, see https://doc.akka.io/docs/akka/current/typed/cluster.html#downing
17:46:34.884 [appka-akka.actor.default-dispatcher-3] INFO akka.cluster.Cluster - Cluster Node [akka://appka#172.17.0.4:25520] - No seed nodes found in configuration, relying on Cluster Bootstrap for joining
17:46:39.084 [appka-akka.actor.default-dispatcher-11] INFO akka.management.internal.HealthChecksImpl - Loading readiness checks [(cluster-membership,akka.management.cluster.scaladsl.ClusterMembershipCheck), (sharding,akka.cluster.sharding.ClusterShardingHealthCheck)]
17:46:39.090 [appka-akka.actor.default-dispatcher-11] INFO akka.management.internal.HealthChecksImpl - Loading liveness checks []
17:46:39.104 [appka-akka.actor.default-dispatcher-3] INFO ClusterListenerActor$ - started actor akka://appka/user - (class akka.actor.typed.internal.adapter.ActorRefAdapter)
17:46:39.888 [appka-akka.actor.default-dispatcher-3] INFO akka.management.scaladsl.AkkaManagement - Binding Akka Management (HTTP) endpoint to: 172.17.0.4:8558
17:46:40.525 [appka-akka.actor.default-dispatcher-3] INFO akka.management.scaladsl.AkkaManagement - Including HTTP management routes for ClusterHttpManagementRouteProvider
17:46:40.806 [appka-akka.actor.default-dispatcher-3] INFO akka.management.scaladsl.AkkaManagement - Including HTTP management routes for ClusterBootstrap
17:46:40.821 [appka-akka.actor.default-dispatcher-3] INFO akka.management.cluster.bootstrap.ClusterBootstrap - Using self contact point address: http://172.17.0.4:8558
17:46:40.914 [appka-akka.actor.default-dispatcher-3] INFO akka.management.scaladsl.AkkaManagement - Including HTTP management routes for HealthCheckRoutes
17:46:44.198 [appka-akka.actor.default-dispatcher-3] INFO akka.management.cluster.bootstrap.ClusterBootstrap - Initiating bootstrap procedure using kubernetes-api method...
17:46:44.200 [appka-akka.actor.default-dispatcher-3] INFO akka.management.cluster.bootstrap.ClusterBootstrap - Bootstrap using `akka.discovery` method: kubernetes-api
17:46:44.226 [appka-akka.actor.default-dispatcher-3] INFO akka.management.scaladsl.AkkaManagement - Bound Akka Management (HTTP) endpoint to: 172.17.0.4:8558
17:46:44.487 [appka-akka.actor.default-dispatcher-6] INFO akka.management.cluster.bootstrap.internal.BootstrapCoordinator - Locating service members. Using discovery [akka.discovery.kubernetes.KubernetesApiServiceDiscovery], join decider [akka.management.cluster.bootstrap.LowestAddressJoinDecider], scheme [http]
17:46:44.490 [appka-akka.actor.default-dispatcher-6] INFO akka.management.cluster.bootstrap.internal.BootstrapCoordinator - Looking up [Lookup(appka,None,Some(tcp))]
17:46:44.493 [appka-akka.actor.default-dispatcher-6] INFO akka.discovery.kubernetes.KubernetesApiServiceDiscovery - Querying for pods with label selector: [app=appka]. Namespace: [default]. Port: [None]
17:46:45.626 [appka-akka.actor.default-dispatcher-12] INFO akka.management.cluster.bootstrap.internal.BootstrapCoordinator - Looking up [Lookup(appka,None,Some(tcp))]
17:46:45.627 [appka-akka.actor.default-dispatcher-12] INFO akka.discovery.kubernetes.KubernetesApiServiceDiscovery - Querying for pods with label selector: [app=appka]. Namespace: [default]. Port: [None]
17:46:48.428 [appka-akka.actor.default-dispatcher-13] INFO akka.management.cluster.bootstrap.internal.BootstrapCoordinator - Located service members based on: [Lookup(appka,None,Some(tcp))]: [ResolvedTarget(172-17-0-4.default.pod.cluster.local,None,Some(/172.17.0.4)), ResolvedTarget(172-17-0-3.default.pod.cluster.local,None,Some(/172.17.0.3))], filtered to [172-17-0-4.default.pod.cluster.local:0, 172-17-0-3.default.pod.cluster.local:0]
17:46:48.485 [appka-akka.actor.default-dispatcher-22] INFO akka.management.cluster.bootstrap.internal.BootstrapCoordinator - Located service members based on: [Lookup(appka,None,Some(tcp))]: [ResolvedTarget(172-17-0-4.default.pod.cluster.local,None,Some(/172.17.0.4)), ResolvedTarget(172-17-0-3.default.pod.cluster.local,None,Some(/172.17.0.3))], filtered to [172-17-0-4.default.pod.cluster.local:0, 172-17-0-3.default.pod.cluster.local:0]
17:46:48.586 [appka-akka.actor.default-dispatcher-12] INFO akka.management.cluster.bootstrap.LowestAddressJoinDecider - Discovered [2] contact points, confirmed [0], which is less than the required [2], retrying
17:46:49.092 [appka-akka.actor.default-dispatcher-12] WARN akka.management.cluster.bootstrap.internal.HttpContactPointBootstrap - Probing [http://172-17-0-4.default.pod.cluster.local:8558/bootstrap/seed-nodes] failed due to: Tcp command [Connect(172-17-0-4.default.pod.cluster.local:8558,None,List(),Some(10 seconds),true)] failed because of java.net.ConnectException: Connection refused
17:46:49.093 [appka-akka.actor.default-dispatcher-12] WARN akka.management.cluster.bootstrap.internal.HttpContactPointBootstrap - Probing [http://172-17-0-3.default.pod.cluster.local:8558/bootstrap/seed-nodes] failed due to: Tcp command [Connect(172-17-0-3.default.pod.cluster.local:8558,None,List(),Some(10 seconds),true)] failed because of java.net.ConnectException: Connection refused
17:46:49.603 [appka-akka.actor.default-dispatcher-22] INFO akka.management.cluster.bootstrap.LowestAddressJoinDecider - Discovered [2] contact points, confirmed [0], which is less than the required [2], retrying
17:46:49.682 [appka-akka.actor.default-dispatcher-21] INFO akka.management.cluster.bootstrap.internal.BootstrapCoordinator - Looking up [Lookup(appka,None,Some(tcp))]
17:46:49.683 [appka-akka.actor.default-dispatcher-21] INFO akka.discovery.kubernetes.KubernetesApiServiceDiscovery - Querying for pods with label selector: [app=appka]. Namespace: [default]. Port: [None]
17:46:49.726 [appka-akka.actor.default-dispatcher-12] INFO akka.management.cluster.bootstrap.internal.BootstrapCoordinator - Located service members based on: [Lookup(appka,None,Some(tcp))]: [ResolvedTarget(172-17-0-4.default.pod.cluster.local,None,Some(/172.17.0.4)), ResolvedTarget(172-17-0-3.default.pod.cluster.local,None,Some(/172.17.0.3))], filtered to [172-17-0-4.default.pod.cluster.local:0, 172-17-0-3.default.pod.cluster.local:0]
17:46:50.349 [appka-akka.actor.default-dispatcher-21] WARN akka.management.cluster.bootstrap.internal.HttpContactPointBootstrap - Probing [http://172-17-0-3.default.pod.cluster.local:8558/bootstrap/seed-nodes] failed due to: Tcp command [Connect(172-17-0-3.default.pod.cluster.local:8558,None,List(),Some(10 seconds),true)] failed because of java.net.ConnectException: Connection refused
17:46:50.504 [appka-akka.actor.default-dispatcher-11] WARN akka.management.cluster.bootstrap.internal.HttpContactPointBootstrap - Probing [http://172-17-0-4.default.pod.cluster.local:8558/bootstrap/seed-nodes] failed due to: Tcp command [Connect(172-17-0-4.default.pod.cluster.local:8558,None,List(),Some(10 seconds),true)] failed because of java.net.ConnectException: Connection refused
You are missing the akka.remote settings block. Something like:
akka {
  actor {
    # provider=remote is possible, but prefer cluster
    provider = cluster
  }
  remote {
    artery {
      transport = tcp # See Selecting a transport below
      canonical.hostname = "127.0.0.1"
      canonical.port = 25520
    }
  }
}
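In Kubernetes you would typically not hardcode 127.0.0.1 as the canonical hostname, since other pods have to reach the remoting port on the pod's own address. A minimal sketch of one common approach, assuming the config reads the address from an environment variable (the variable name POD_IP and the HOCON reference canonical.hostname = ${?POD_IP} are assumptions, not part of the original setup): expose the pod IP through the downward API in the container spec of the Deployment above.

      # Hypothetical addition to the appka container spec: the downward API
      # publishes the pod IP so the Akka config can pick it up via ${?POD_IP}.
      containers:
        - name: appka
          image: akkacluster:latest
          env:
            - name: POD_IP                 # assumed variable name
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP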
OBSOLETE:
I keep this post for further reference, but you can find a better diagnosis (not solved yet, but worked around) in
Istio: RequestAuthentication jwksUri does not resolve internal services names
UPDATE:
In the Istio log we see the following error. uaa is a Kubernetes pod serving OAuth authentication/authorization. It is accessed by the name uaa from the normal services. I do not know why istiod cannot find the uaa host name. Do I have to use a specific name? (Remember, standard services resolve the uaa host perfectly.)
2021-03-03T18:39:36.750311Z error model Failed to fetch public key from "http://uaa:8090/uaa/token_keys": Get "http://uaa:8090/uaa/token_keys": dial tcp: lookup uaa on 10.96.0.10:53: no such host
2021-03-03T18:39:36.750364Z error Failed to fetch jwt public key from "http://uaa:8090/uaa/token_keys": Get "http://uaa:8090/uaa/token_keys": dial tcp: lookup uaa on 10.96.0.10:53: no such host
2021-03-03T18:39:36.753394Z info ads LDS: PUSH for node:product-composite-5cbf8498c7-jd4n5.chp18 resources:29 size:134.3kB
2021-03-03T18:39:36.754623Z info ads RDS: PUSH for node:product-composite-5cbf8498c7-jd4n5.chp18 resources:14 size:14.2kB
2021-03-03T18:39:36.790916Z warn ads ADS:LDS: ACK ERROR sidecar~10.1.1.56~product-composite-5cbf8498c7-jd4n5.chp18~chp18.svc.cluster.local-10 Internal:Error adding/updating listener(s) virtualInbound: Provider 'origins-0' in jwt_authn config has invalid local jwks: Jwks RSA [n] or [e] field is missing or has a parse error
2021-03-03T18:39:55.618106Z info ads ADS: "10.1.1.55:41162" sidecar~10.1.1.55~review-65b6886c89-bcv5f.chp18~chp18.svc.cluster.local-6 terminated rpc error: code = Canceled desc = context canceled
Original question
I have a service that is working fine after injecting the Istio sidecar into a standard Kubernetes pod.
I'm trying to add JWT authentication, and for this I'm following the official guide Authorization with JWT.
My problem is:
If I create the JWT resources (RequestAuthentication and AuthorizationPolicy) AFTER injecting the Istio dependencies, everything (seems) to work fine.
But if I create the JWT resources (RequestAuthentication and AuthorizationPolicy) first and then inject Istio, the pod doesn't start. Checking the logs, it seems the sidecar is not able to work (maybe checking the health?).
My code:
JWT Resources
apiVersion: "security.istio.io/v1beta1"
kind: "RequestAuthentication"
metadata:
name: "ra-product-composite"
spec:
selector:
matchLabels:
app: "product-composite"
jwtRules:
- issuer: "http://uaa:8090/uaa/oauth/token"
jwksUri: "http://uaa:8090/uaa/token_keys"
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: "ap-product-composite"
spec:
  selector:
    matchLabels:
      app: "product-composite"
  action: ALLOW
#  rules:
#  - from:
#    - source:
#        requestPrincipals: ["http://uaa:8090/uaa/oauth/token/faf5e647-74ab-42cc-acdb-13cc9c573d5d"]
# b99ccf71-50ed-4714-a7fc-e85ebae4a8bb
2- I use destination rules as follows
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: dr-product-composite
spec:
  host: product-composite
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
3- My service deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-composite
spec:
  replicas: 1
  selector:
    matchLabels:
      app: product-composite
  template:
    metadata:
      labels:
        app: product-composite
        version: latest
    spec:
      containers:
        - name: comp
          image: bthinking/product-composite-service
          imagePullPolicy: Never
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: "docker"
            - name: SPRING_CONFIG_LOCATION
              value: file:/config-repo/application.yml,file:/config-repo/product-composite.yml
          envFrom:
            - secretRef:
                name: rabbitmq-client-secrets
          ports:
            - containerPort: 80
          resources:
            limits:
              memory: 350Mi
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /actuator/info
              port: 4004
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 2
            failureThreshold: 20
            successThreshold: 1
          readinessProbe:
            httpGet:
              scheme: HTTP
              path: /actuator/health
              port: 4004
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 2
            failureThreshold: 3
            successThreshold: 1
          volumeMounts:
            - name: config-repo-volume
              mountPath: /config-repo
      volumes:
        - name: config-repo-volume
          configMap:
            name: config-repo-product-composite
---
apiVersion: v1
kind: Service
metadata:
  name: product-composite
spec:
  selector:
    app: "product-composite"
  ports:
    - port: 80
      name: http
      targetPort: 80
    - port: 4004
      name: http-mgm
      targetPort: 4004
4- Error log in the pod (combined service and sidecar)
2021-03-02 19:34:41.315 DEBUG 1 --- [undedElastic-12] o.s.s.w.s.a.AuthorizationWebFilter : Authorization successful
2021-03-02 19:34:41.315 DEBUG 1 --- [undedElastic-12] .b.a.e.w.r.WebFluxEndpointHandlerMapping : [0e009bf1-133] Mapped to org.springframework.boot.actuate.endpoint.web.reactive.AbstractWebFluxEndpointHandlerMapping$ReadOperationHandler#e13aa23
2021-03-02 19:34:41.316 DEBUG 1 --- [undedElastic-12] ebSessionServerSecurityContextRepository : No SecurityContext found in WebSession: 'org.springframework.web.server.session.InMemoryWebSessionStore$InMemoryWebSession#48e89a58'
2021-03-02 19:34:41.319 DEBUG 1 --- [undedElastic-15] .s.w.r.r.m.a.ResponseEntityResultHandler : [0e009bf1-133] Using 'application/vnd.spring-boot.actuator.v3+json' given [*/*] and supported [application/vnd.spring-boot.actuator.v3+json, application/vnd.spring-boot.actuator.v2+json, application/json]
2021-03-02 19:34:41.320 DEBUG 1 --- [undedElastic-15] .s.w.r.r.m.a.ResponseEntityResultHandler : [0e009bf1-133] 0..1 [java.util.Collections$UnmodifiableMap<?, ?>]
2021-03-02 19:34:41.321 DEBUG 1 --- [undedElastic-15] o.s.http.codec.json.Jackson2JsonEncoder : [0e009bf1-133] Encoding [{}]
2021-03-02 19:34:41.326 DEBUG 1 --- [or-http-epoll-3] r.n.http.server.HttpServerOperations : [id: 0x0e009bf1, L:/127.0.0.1:4004 - R:/127.0.0.1:57138] Detected non persistent http connection, preparing to close
2021-03-02 19:34:41.327 DEBUG 1 --- [or-http-epoll-3] o.s.w.s.adapter.HttpWebHandlerAdapter : [0e009bf1-133] Completed 200 OK
2021-03-02 19:34:41.327 DEBUG 1 --- [or-http-epoll-3] r.n.http.server.HttpServerOperations : [id: 0x0e009bf1, L:/127.0.0.1:4004 - R:/127.0.0.1:57138] Last HTTP response frame
2021-03-02 19:34:41.328 DEBUG 1 --- [or-http-epoll-3] r.n.http.server.HttpServerOperations : [id: 0x0e009bf1, L:/127.0.0.1:4004 - R:/127.0.0.1:57138] Last HTTP packet was sent, terminating the channel
2021-03-02T19:34:41.871551Z warn Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 1 successful, 0 rejected; lds updates: 0 successful, 1 rejected
5- Istio injection
kubectl get deployment product-composite -o yaml | istioctl kube-inject -f - | kubectl apply -f -
NOTICE: I have checked a lot of posts on SO, and it seems that health checking creates a lot of problems with sidecars and other configurations. I have checked the guide Health Checking of Istio Services with no success. Specifically, I tried to disable the probe rewriting with sidecar.istio.io/rewriteAppHTTPProbers: "false", but it is worse (in this case, neither the sidecar nor the service starts).
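For reference, that annotation is set on the pod template metadata, not on the Deployment object itself. A minimal sketch against the deployment above (the value shown is only an example; "false" disables the probe rewriting, "true" enables it):

  # Hypothetical placement of the probe-rewrite annotation, inside spec.template of the deployment above.
  template:
    metadata:
      labels:
        app: product-composite
      annotations:
        sidecar.istio.io/rewriteAppHTTPProbers: "true"   # or "false" to disable rewriting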
I have an EKS cluster with an EBS storage class/volume.
I am able to deploy the HDFS namenode and datanode images (bde2020/hadoop-xxx) using StatefulSets successfully.
When I try to put a file to HDFS from my machine using hdfs://:, it reports success, but the file does not get written to the datanodes.
In the namenode log, I see the error below.
Could it be something to do with the EBS volume? I cannot even upload/download files from the namenode GUI. Could it be because the datanode host name hdfs-data-X.hdfs-data.pulse.svc.cluster.local is not resolvable from my local machine?
Please help.
2020-05-12 17:38:51,360 INFO hdfs.StateChange: BLOCK* allocate blk_1073741825_1001, replicas=10.8.29.112:9866, 10.8.29.176:9866, 10.8.29.188:9866 for /vault/a.json
2020-05-12 17:39:13,036 WARN blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology
2020-05-12 17:39:13,036 WARN protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=3, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2020-05-12 17:39:13,036 WARN blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2020-05-12 17:39:13,036 INFO hdfs.StateChange: BLOCK* allocate blk_1073741826_1002, replicas=10.8.29.176:9866, 10.8.29.188:9866 for /vault/a.json
2020-05-12 17:39:34,607 INFO namenode.FSEditLog: Number of transactions: 11 Total time for transactions(ms): 23 Number of transactions batched in Syncs: 3 Number of syncs: 8 SyncTimes(ms): 23
2020-05-12 17:39:35,146 WARN blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 2 to reach 3 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology
2020-05-12 17:39:35,146 WARN protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 2 but only 0 storage types can be selected (replication=3, selected=[], unavailable=[DISK], removed=[DISK, DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2020-05-12 17:39:35,146 WARN blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 2 to reach 3 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2020-05-12 17:39:35,147 INFO hdfs.StateChange: BLOCK* allocate blk_1073741827_1003, replicas=10.8.29.188:9866 for /vault/a.json
2020-05-12 17:39:57,319 WARN blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 3 to reach 3 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology
2020-05-12 17:39:57,319 WARN protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 3 but only 0 storage types can be selected (replication=3, selected=[], unavailable=[DISK], removed=[DISK, DISK, DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2020-05-12 17:39:57,319 WARN blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 3 to reach 3 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2020-05-12 17:39:57,320 INFO ipc.Server: IPC Server handler 5 on default port 8020, call Call#12 Retry#0 org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 10.254.40.95:59328
java.io.IOException: File /vault/a.json could only be written to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2219)
at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2789)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:892)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:574)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:528)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:999)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:927)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2915)
My namenode web page shows the following:
Node                                                Http Address                                                Last contact  Last Block Report  Capacity  Blocks  Block pool used  Version
hdfs-data-0.hdfs-data.pulse.svc.cluster.local:9866  http://hdfs-data-0.hdfs-data.pulse.svc.cluster.local:9864  1s            0m                 975.9 MB  0       24 KB (0%)       3.2.1
hdfs-data-1.hdfs-data.pulse.svc.cluster.local:9866  http://hdfs-data-1.hdfs-data.pulse.svc.cluster.local:9864  2s            0m                 975.9 MB  0       24 KB (0%)       3.2.1
hdfs-data-2.hdfs-data.pulse.svc.cluster.local:9866  http://hdfs-data-2.hdfs-data.pulse.svc.cluster.local:9864  1s            0m                 975.9 MB  0       24 KB (0%)       3.2.1
My deployment:
NameNode:
#clusterIP service of namenode
apiVersion: v1
kind: Service
metadata:
  name: hdfs-name
  namespace: pulse
  labels:
    component: hdfs-name
spec:
  ports:
    - port: 8020
      protocol: TCP
      name: nn-rpc
    - port: 9870
      protocol: TCP
      name: nn-web
  selector:
    component: hdfs-name
  type: ClusterIP
---
#namenode stateful deployment
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: hdfs-name
  namespace: pulse
  labels:
    component: hdfs-name
spec:
  serviceName: hdfs-name
  replicas: 1
  selector:
    matchLabels:
      component: hdfs-name
  template:
    metadata:
      labels:
        component: hdfs-name
    spec:
      initContainers:
        - name: delete-lost-found
          image: busybox
          command: ["sh", "-c", "rm -rf /hadoop/dfs/name/lost+found"]
          volumeMounts:
            - name: hdfs-name-pv-claim
              mountPath: /hadoop/dfs/name
      containers:
        - name: hdfs-name
          image: bde2020/hadoop-namenode
          env:
            - name: CLUSTER_NAME
              value: hdfs-k8s
            - name: HDFS_CONF_dfs_permissions_enabled
              value: "false"
          ports:
            - containerPort: 8020
              name: nn-rpc
            - containerPort: 9870
              name: nn-web
          volumeMounts:
            - name: hdfs-name-pv-claim
              mountPath: /hadoop/dfs/name
              #subPath: data #subPath required as on root level, lost+found folder is created which does not cause to run namenode --format
  volumeClaimTemplates:
    - metadata:
        name: hdfs-name-pv-claim
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: ebs
        resources:
          requests:
            storage: 1Gi
Datanode:
#headless service of datanode
apiVersion: v1
kind: Service
metadata:
  name: hdfs-data
  namespace: pulse
  labels:
    component: hdfs-data
spec:
  ports:
    - port: 80
      protocol: TCP
  selector:
    component: hdfs-data
  clusterIP: None
  type: ClusterIP
---
#datanode stateful deployment
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: hdfs-data
  namespace: pulse
  labels:
    component: hdfs-data
spec:
  serviceName: hdfs-data
  replicas: 3
  selector:
    matchLabels:
      component: hdfs-data
  template:
    metadata:
      labels:
        component: hdfs-data
    spec:
      containers:
        - name: hdfs-data
          image: bde2020/hadoop-datanode
          env:
            - name: CORE_CONF_fs_defaultFS
              value: hdfs://hdfs-name:8020
          volumeMounts:
            - name: hdfs-data-pv-claim
              mountPath: /hadoop/dfs/data
  volumeClaimTemplates:
    - metadata:
        name: hdfs-data-pv-claim
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: ebs
        resources:
          requests:
            storage: 1Gi
It seems to be an issue with the datanodes not being reachable over the RPC port from my client machine.
The datanodes' HTTP port was reachable from my client machine, so I tried using webhdfs:// (instead of hdfs://) after putting a mapping of datanode pod name to IP in my hosts file, and it worked.
Problem
We run Istio on our Kubernetes cluster and we're implementing AuthorizationPolicies.
We want to apply a filter on email address, an HTTP-condition only applicable to HTTP services.
Our Kiali service should be an HTTP service (it has an HTTP port, an HTTP listener, and even has HTTP conditions applied to its filters), and yet the AuthorizationPolicy does not work.
What gives?
Our setup
We have a management namespace with an ingressgateway (port 443), and a gateway+virtual service for Kiali.
These latter two point to the Kiali service in the kiali namespace.
Both the management and kiali namespaces have a deny-all policy and an allow policy to make an exception for particular users.
(See AuthorizationPolicy YAMLs below.)
Authorization on the management ingress gateway works.
The ingress gateway has 3 listeners, all HTTP, and HTTP conditions are created and applied as you would expect.
You can visit its backend services other than Kiali if you're on the email list, and you cannot do so if you're not on the email list.
Authorization on the Kiali service does not work.
It has 99 listeners (!), including an HTTP listener on its configured 20001 port and its IP, but it does not work.
You cannot visit the Kiali service (due to the default deny-all policy).
The Kiali service has port 20001 enabled and named 'http-kiali', so the VirtualService should be OK with that. (See the YAMLs for the service and virtual service below.)
EDIT: it was suggested that the syntax of the email values matters.
I think that has been taken care of:
in the management namespace, the YAML below works as expected
in the kiali namespace, the same YAML fails to work as expected.
the empty brackets in the 'property(map[request.auth.claims[email]:{[brackets#test.com] []}])' message I think are the Values (present) and NotValues (absent), respectively, as per 'constructed internal model: &{Permissions:[{Properties:[map[request.auth.claims[email]:{Values:[brackets#test.com] NotValues:[]}]]}]}'
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: testpolicy-brackets
  namespace: kiali
spec:
  action: ALLOW
  rules:
    - when:
        - key: source.namespace
          values: ["brackets"]
        - key: request.auth.claims[email]
          values: ["brackets#test.com"]
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: testpolicy-yamllist
  namespace: kiali
spec:
  action: ALLOW
  rules:
    - when:
        - key: source.namespace
          values:
            - list
        - key: request.auth.claims[email]
          values:
            - list#test.com
debug rbac found authorization allow policies for workload [app=kiali,pod-template-hash=5c97c4bb66,security.istio.io/tlsMode=istio,service.istio.io/canonical-name=kiali,service.istio.io/canonical-revision=v1.16.0,version=v1.16.0] in kiali
debug rbac constructed internal model: &{Permissions:[{Services:[] Hosts:[] NotHosts:[] Paths:[] NotPaths:[] Methods:[] NotMethods:[] Ports:[] NotPorts:[] Constraints:[] AllowAll:true v1beta1:true}] Principals:[{Users:[] Names:[] NotNames:[] Group: Groups:[] NotGroups:[] Namespaces:[] NotNamespaces:[] IPs:[] NotIPs:[] RequestPrincipals:[] NotRequestPrincipals:[] Properties:[map[source.namespace:{Values:[brackets] NotValues:[]}] map[request.auth.claims[email]:{Values:[brackets#test.com] NotValues:[]}]] AllowAll:false v1beta1:true}]}
debug rbac generated policy ns[kiali]-policy[testpolicy-brackets]-rule[0]: permissions:<and_rules:<rules:<any:true > > > principals:<and_ids:<ids:<or_ids:<ids:<metadata:<filter:"istio_authn" path:<key:"source.principal" > value:<string_match:<safe_regex:<google_re2:<> regex:".*/ns/brackets/.*" > > > > > > > ids:<or_ids:<ids:<metadata:<filter:"istio_authn" path:<key:"request.auth.claims" > path:<key:"email" > value:<list_match:<one_of:<string_match:<exact:"brackets#test.com" > > > > > > > > > >
debug rbac ignored HTTP principal for TCP service: property(map[request.auth.claims[email]:{[brackets#test.com] []}])
debug rbac role skipped for no principals found
debug rbac found authorization allow policies for workload [app=kiali,pod-template-hash=5c97c4bb66,security.istio.io/tlsMode=istio,service.istio.io/canonical-name=kiali,service.istio.io/canonical-revision=v1.16.0,version=v1.16.0] in kiali
debug rbac constructed internal model: &{Permissions:[{Services:[] Hosts:[] NotHosts:[] Paths:[] NotPaths:[] Methods:[] NotMethods:[] Ports:[] NotPorts:[] Constraints:[] AllowAll:true v1beta1:true}] Principals:[{Users:[] Names:[] NotNames:[] Group: Groups:[] NotGroups:[] Namespaces:[] NotNamespaces:[] IPs:[] NotIPs:[] RequestPrincipals:[] NotRequestPrincipals:[] Properties:[map[source.namespace:{Values:[list] NotValues:[]}] map[request.auth.claims[email]:{Values:[list#test.com] NotValues:[]}]] AllowAll:false v1beta1:true}]}
debug rbac generated policy ns[kiali]-policy[testpolicy-yamllist]-rule[0]: permissions:<and_rules:<rules:<any:true > > > principals:<and_ids:<ids:<or_ids:<ids:<metadata:<filter:"istio_authn" path:<key:"source.principal" > value:<string_match:<safe_regex:<google_re2:<> regex:".*/ns/list/.*" > > > > > > > ids:<or_ids:<ids:<metadata:<filter:"istio_authn" path:<key:"request.auth.claims" > path:<key:"email" > value:<list_match:<one_of:<string_match:<exact:"list#test.com" > > > > > > > > > >
debug rbac ignored HTTP principal for TCP service: property(map[request.auth.claims[email]:{[list#test.com] []}])
debug rbac role skipped for no principals found
(Follows: a list of YAMLs mentioned above)
# Cluster AuthorizationPolicies
## Management namespace
Name: default-deny-all-policy
Namespace: management
API Version: security.istio.io/v1beta1
Kind: AuthorizationPolicy
Spec:
---
Name: allow-specified-email-addresses
Namespace: management
API Version: security.istio.io/v1beta1
Kind: AuthorizationPolicy
Spec:
  Action: ALLOW
  Rules:
    When:
      Key: request.auth.claims[email]
      Values:
        my.email@my.provider.com
---
## Kiali namespace
Name: default-deny-all-policy
Namespace: kiali
API Version: security.istio.io/v1beta1
Kind: AuthorizationPolicy
Spec:
---
Name: allow-specified-email-addresses
Namespace: kiali
API Version: security.istio.io/v1beta1
Kind: AuthorizationPolicy
Spec:
  Action: ALLOW
  Rules:
    When:
      Key: request.auth.claims[email]
      Values:
        my.email@my.provider.com
---
# Kiali service YAML
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kiali
    version: v1.16.0
  name: kiali
  namespace: kiali
spec:
  clusterIP: 10.233.18.102
  ports:
    - name: http-kiali
      port: 20001
      protocol: TCP
      targetPort: 20001
  selector:
    app: kiali
    version: v1.16.0
  sessionAffinity: None
  type: ClusterIP
---
# Kiali VirtualService YAML
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: kiali-virtualservice
  namespace: management
spec:
  gateways:
    - kiali-gateway
  hosts:
    - our_external_kiali_url
  http:
    - match:
        - uri:
            prefix: /
      route:
        - destination:
            host: kiali.kiali.svc.cluster.local
            port:
              number: 20001
Marking as solved: I had forgotten to apply a RequestAuthentication to the Kiali namespace.
The problematic situation, with the fix in the third point below:
RequestAuthentication on the management namespace adds a user JWT (through an EnvoyFilter that forwards requests to an authentication service)
AuthorizationPolicy on the management namespace checks the request.auth.claims[email]. These fields exist in the JWT and all is well.
RequestAuthentication on the Kiali namespace was missing. I fixed the problem by adding a RequestAuthentication for the Kiali namespace (see the sketch after this list), which populates the user information and allows the AuthorizationPolicy to perform its checks on fields that actually exist.
AuthorizationPolicy on the Kiali namespace also checks the request.auth.claims[email] field, but since there is no authentication, there is no JWT with these fields. (There are some fields populated, e.g. source.namespace, but nothing like a JWT.) Hence, user validation on that field fails, as you would expect.
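For illustration, a minimal sketch of the missing resource, assuming the kiali namespace should trust the same issuer as the management namespace (the name, issuer and jwksUri below are placeholders, not the real values):

apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: ra-kiali                                     # hypothetical name
  namespace: kiali
spec:
  jwtRules:
    - issuer: "https://our-issuer.example.com"       # placeholder issuer
      jwksUri: "https://our-issuer.example.com/jwks" # placeholder JWKS endpoint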
According to the Istio documentation:
Unsupported keys and values are silently ignored.
In your debug log there is:
debug rbac ignored HTTP principal for TCP service: property(map[request.auth.claims[email]:{[my.email@my.provider.com] []}])
As you can see there are []}] chars there, which might suggest that the value got parsed the wrong way and was ignored as an unsupported value.
Try to put your values inside [""], as suggested in the documentation:
request.auth.claims
Claims from the origin JWT. The actual claim name is surrounded by brackets. HTTP only.
key: request.auth.claims[iss]
values: ["*@foo.com"]
Hope it helps.
I am trying to filter access to external resources. I have created a ServiceEntry:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: bbc-ext
spec:
  hosts:
    - "www.bbc.co.uk"
  ports:
    - number: 443
      name: https
      protocol: HTTPS
I am using sourceLabels to filter which source apps are allowed to access external resources.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bbc-ext
spec:
  hosts:
    - "www.bbc.co.uk"
  http:
    - match:
        - sourceLabels:
            envir: "production"
      route:
        - destination:
            host: "www.bbc.co.uk"
          weight: 100
    - route:
        - destination:
            host: "www.bbc.co.uk"
      fault:
        abort:
          percent: 100
          httpStatus: 400
My pod is labeled envir=development but is still allowed access to the resource.
kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
sleep-d7bfccf65-ws6t6 2/2 Running 0 16m app=sleep,envir=development,pod-template-hash=836977921
But when I log into the container and run a curl request, it still succeeds. What am I doing wrong here?
kubectl exec -it sleep-d7bfccf65-ws6t6 -c sleep bash
root@sleep-d7bfccf65-ws6t6:/# curl -v -sL https://www.bbc.co.uk -w "%{http_code}\n" -o /dev/null
[...]
< Cache-Control: private, max-age=0, must-revalidate
< Vary: Accept-Encoding, X-CDN, X-BBC-Edge-Scheme
<
{ [data not shown]
* Connection #0 to host www.bbc.co.uk left intact
200
Still the same.
I also noticed that sync is not working for routes.
istioctl proxy-status
PROXY CDS LDS EDS RDS PILOT
istio-egressgateway-6cb5b78857-cvqfz.istio-system SYNCED SYNCED SYNCED (100%) NOT SENT istio-pilot-56f6487cdb-qlhzr
istio-ingressgateway-5766b9cc69-64bgd.istio-system SYNCED SYNCED SYNCED (100%) NOT SENT istio-pilot-56f6487cdb-qlhzr
sleep-86f6b99f94-n8l8r.production SYNCED SYNCED SYNCED (100%) SYNCED istio-pilot-56f6487cdb-qlhzr
sleep-d7bfccf65-qbs7v.development SYNCED SYNCED SYNCED (100%) SYNCED istio-pilot-56f6487cdb-qlhzr